Data Science with Spark and Hadoop
This Video Lecture briefly introduces the use of Apache Spark and Hadoop for Data Science applications.
Prof. Dr. Jens Lehmann, Lead Scientist for Conversational AI and Knowledge Graphs at Fraunhofer IAIS, presented an overview of current Conversational AI research.
Big Data technologies are often used in domains where data is generated, stored, and processed at rates that a single computer cannot handle efficiently. One such domain is energy. Here, the processes of energy generation, transmission, distribution, and use have to be monitored and analyzed concurrently in order to ensure system stability without brownouts or blackouts. The transmission systems (grids) that transport electric energy are, in general, very large and robust infrastructures equipped with an abundance of monitoring devices.
This module will discuss extraction for Knowledge Graphs, focusing on web data extraction. Web data extraction is essential to make information available on the web accessible and usable by Knowledge Graphs. We provide a thorough introduction to the topic, featuring both Oxford’s Vadalog and OXPath systems.
Knowledge Graphs (KGs) are one of the key trends among the next wave of technologies. Many definitions exist of what a Knowledge Graph is, and in this chapter we take the position that precisely this multitude of definitions is one of the strengths of the area. We will choose a particular perspective, which we call the layered perspective, with three views on Knowledge Graphs.
This module will cover the setup, APIs, and different layers of SANSA. At the end of this module, the audience will be able to execute examples and create programs that use the SANSA APIs. The final part of this lecture is planned as an interactive session to wrap up the introduced concepts and present attendees with some open research questions currently studied by the community.
This module will cover the needs and challenges of distributed analytics and then dive into the details of the Scalable Semantic Analytics Stack (SANSA), which is used to perform scalable analytics over knowledge graphs. It will cover the different SANSA layers and the underlying principles for achieving scalability in knowledge graph processing.
At the practical level, Big Data frameworks offer different APIs for graph computation and graph processing. In this lecture, the important libraries built on top of Apache Spark will be covered, including Spark SQL, GraphX, and MLlib. The audience will learn to build scalable algorithms in Spark using Scala.
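To give a feel for the vertex-centric ("think like a vertex") computation model that GraphX's Pregel-style API follows, here is a minimal sketch in plain Python (not GraphX itself, which requires a Spark installation and Scala): each vertex repeatedly adopts the smallest component label seen among its neighbours, which computes connected components. The edge list and vertex labels are illustrative assumptions.

```python
# Toy illustration of vertex-centric graph processing (the model behind
# GraphX's Pregel API): iterate until no vertex label changes.
edges = [(1, 2), (2, 3), (4, 5)]  # two connected components: {1,2,3} and {4,5}

# Each vertex starts with its own id as its component label.
vertices = {v: v for e in edges for v in e}

changed = True
while changed:
    changed = False
    for a, b in edges:
        # "Message passing": both endpoints adopt the smaller label.
        low = min(vertices[a], vertices[b])
        if vertices[a] != low or vertices[b] != low:
            vertices[a] = vertices[b] = low
            changed = True

print(vertices)  # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```

In GraphX the same logic is expressed with `Pregel`/`aggregateMessages` over a distributed edge partitioning, so each iteration is a parallel superstep rather than a sequential loop.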
Processing frameworks are among the most essential components of a Big Data system. There are three categories of such frameworks, namely: batch-only frameworks (Hadoop), stream-only frameworks (Storm, Samza), and hybrid frameworks (Spark, Hive, and Flink). In this lecture, we will introduce them and cover one of the major Big Data frameworks, Apache Spark. We will cover Spark fundamentals and the model of Resilient Distributed Datasets (RDDs), which Spark uses to implement in-memory batch computation.
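The RDD model described above can be sketched in plain Python (a conceptual illustration, not Spark itself, which needs a cluster or local installation): a dataset is split into partitions, a map step is applied to each partition independently, and a reduce step merges the per-partition results. The sample lines and word-count task are illustrative assumptions.

```python
from functools import reduce

# Toy sketch of the RDD idea: an immutable dataset split into partitions,
# transformed per partition, then combined by a reduce step.
data = ["spark is fast", "hadoop stores data", "spark uses memory"]

# "Partition" the dataset; Spark would distribute these across workers.
partitions = [data[0:2], data[2:3]]

def count_words(partition):
    # Map step: count words locally within one partition.
    counts = {}
    for line in partition:
        for word in line.split():
            counts[word] = counts.get(word, 0) + 1
    return counts

def merge(a, b):
    # Reduce step: merge two per-partition count dictionaries.
    for word, n in b.items():
        a[word] = a.get(word, 0) + n
    return a

word_counts = reduce(merge, map(count_words, partitions), {})
print(word_counts["spark"])  # -> 2, "spark" appears in two lines
```

In actual Spark this would be `sc.parallelize(data).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)`, with the partitions held in memory across iterations, which is what makes the in-memory batch model fast for iterative workloads.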