Analysing big data stored on a cluster is not easy. Spark allows you to do so much more than just MapReduce. Rebecca Tickle takes us through some code.
This video was filmed and edited by Sean Riley.
Computer Science at the University of Nottingham: https://bit.ly/nottscomputer
Computerphile is a sister project to Brady Haran’s Numberphile. More at http://www.bradyharan.com
47 replies on “Apache Spark – Computerphile”
This was very helpful
Great explanations. Of course there are many things going on behind the scenes, but good overview.
The content is nice and well explained, but the camerawork and editing are so bad.
We are not here for a documentary; the over-the-shoulder shot of the computer is completely useless and distracting. If you want to use your cuts, use something like picture-in-picture, but please let us focus on the code!!
What is the architectural difference between Spark and MapReduce?
Wow, congrats on the content. You were able to explain it in a concise yet logical and detailed way. Nice!
It's so clear and easy after the explanation! I will be waiting for more vids about clustering and distributed computing)
I wish she also talked a little about Spark's ability to deal with data streams
I really love your videos. I would like to know if it is possible to watch them in French, or at least with subtitles, so that we can follow.
Sorry for the redundancy, just verifying my understanding. Do I understand correctly that (when running this example on a cluster) collect runs the 'reduceByKey' against the results on each node and then reduces them to a final result? Say on Node 1 I have a count of the word 'something' = 5, and on Node 2 a count of 'something' = 3; then collect combines those two nodes' counts into 'something' = 8. And so on…?
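To make the merge step in the question above concrete, here is a minimal plain-Python sketch, not actual Spark code: `node1` and `node2` are hypothetical partial results, and `reduce_by_key` only imitates the semantics of Spark's `reduceByKey`.

```python
# Plain-Python simulation of how Spark merges per-key counts across
# nodes (illustration only, not actual Spark code).

def reduce_by_key(pairs, func):
    """Combine values that share a key, like Spark's reduceByKey."""
    out = {}
    for key, value in pairs:
        out[key] = func(out[key], value) if key in out else value
    return out

# Partial word counts held on two hypothetical nodes.
node1 = [("something", 5), ("spark", 2)]
node2 = [("something", 3), ("cluster", 1)]

# The shuffle brings pairs with the same key together; the reduce
# function then merges them, so 5 + 3 becomes 8 for 'something'.
combined = reduce_by_key(node1 + node2, lambda a, b: a + b)
print(combined)  # {'something': 8, 'spark': 2, 'cluster': 1}
```

One nuance: in real Spark, `reduceByKey` also pre-combines values on each node before the shuffle, so only partial sums travel over the network; `collect` then just returns the already-merged pairs to the driver.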
Is there any meta-analysis on the usefulness of big-data analysis? How often do jobs get run that either produce no meaningful data or don't produce any statistically significant data?
At 3:16, line 12 is wrong. Great review otherwise 👍!
Would have liked it to be a bit more in-depth and technical; it was too high-level.
Please show some drawings or animations of data going back and forth between the nodes.
Please give time measurements comparing single-node with multi-node execution. What is the overhead?
Thanks, nice vid.
She's mumbling in the beginning… I can't really hear her (and I'm an American-born English speaker).
What programming language is she using??
Typo in line 32: using `splitLines` instead of `word`?
It's a bit silly, but I can't understand 100% because English isn't my first language. I hope someone could add English subs to every video on this channel; I find Computerphile videos easy to understand because of the excellent explanations.
Looks like you could build a search engine with that.
These data ones are really good! Keep them coming!
Thank you for teaching an old man new things.
More like this!!!!!!
Woohoo, Rebecca is back!
Was so excited to see this posted 🙂 I'm a Cassandra professional.
More of these, please. More big data.
She's damn good at explaining and easy to listen to. Any plans of having her host other episodes?
(Sorry for "her", I don't know her name.)
The RDD API has been outmoded since Spark 2.0, and in almost every use case you should be using the Dataset API. You lose out on a lot of improvements and optimizations by using RDDs instead of Datasets.
A great example of how programming languages are a reasonably efficient mechanism to communicate sections of program and how natural language really is not.
Apache Flink next please
Ahh… so refreshing after taking a week's break from dev work and staying away from non-dev topics. Lol, I love our field. Like music to my ears.
For anyone interested: although the documentation for Apache Flink is awful and it doesn't support Java versions beyond 8, it at least lets you do setup on each node. Spark does not have any functionality for running one-time setup on each node, which makes it infeasible for many use cases. These distributed processing frameworks are quite opinionated, and if you're not doing word count, or streaming data from one input stream to another with very simple stateless transformations in between, you'll find little in the documentation or functionality. They're not really designed for use cases where you have a parallel program with a fixed-size data source known in advance and want to scale it up as you would by adding more threads; they're more for continuous data processing.
I study bioinformatics, handling txt files many gigabytes in size, and this could be so handy.
The first time I learned about Apache Spark, I was looking up documentation for another framework named Spark.
Really good summary, thank you!
Totally lost me three minutes into this video.
Do a video explaining AES!
feels like this video is four years too late … :-/
Thank you so much. This was an incredible explanation
Ohhh, she is using VS Code! I love VS Code 😀
Really interesting video! I have done some MapReduce before, but I haven't come across Apache Spark.
Good old Scala.
Where are the extra bits?
Note to the editor: please stop cutting away from the code so quickly. We're trying to follow along in the code based on what she's saying; at that moment, we don't need to cut back to the shot of her face. We can still hear her voice in the voiceover.
Can you do Apache Kafka next? How do they compare?
Good video 🙂
Pretty sure there's a typo in that code: "splitLines" doesn't exist and is probably supposed to be words.map(…) instead.
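For anyone reconstructing the example, here is a plain-Python sketch of the word-count pipeline shape being discussed (hypothetical names, not the video's actual Scala/Spark code), showing that the flat-mapped words, not the split lines, should feed the map and reduce steps:

```python
# Plain-Python sketch of the word-count pipeline shape
# (hypothetical, not the video's actual Scala/Spark code).

lines = ["to be or not to be"]

# flatMap: split each line into individual words.
words = [w for line in lines for w in line.split()]

# map: pair each word with a count of 1. This is the step where the
# words, not the split lines, should be the input.
pairs = [(w, 1) for w in words]

# reduceByKey: sum the counts per word.
counts = {}
for word, n in pairs:
    counts[word] = counts.get(word, 0) + n

print(counts["to"])  # 2
```

Each comment marks the stage it simulates; in Spark the same three stages run per partition, with a shuffle before the per-key reduce.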