Artificial Intelligence (AI) is the pinnacle of digitalization and is revolutionizing how we work and live. We now have more data than ever about our business processes, and Deep Learning in particular gives us the tools to create real value from that data. With AI, we can make our processes more efficient, improve quality, or do completely new things by creating new business models. AI also lets computers understand audio and natural-language text, enabling new user experiences.
To reduce environmental impact, the share of renewable energy sources (RES) in energy production has to be increased. However, that growth adversely affects grid stability, because RES production depends on weather conditions.
This introductory lecture discusses the Big Data processing pipeline and the Big Data landscape from the following perspectives:
- Big Data Frameworks
- NoSQL Platforms and Knowledge Graphs
- Stream Processing Data Engines
- Big Data Preprocessing
- Big Data Analytics
- Big Data Visualization Tools.
Big Data Analytics is a crucial component of the Big Data paradigm and refers to the process of extracting useful knowledge from large datasets or streams of data. Due to the enormous volume, high dimensionality, heterogeneity, and distributed nature of the data, traditional data mining techniques may be unsuitable for big data.
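One reason traditional techniques break down is that they assume the whole dataset fits in memory, whereas big-data processing works over chunks or streams. A minimal sketch of this idea (the `chunked_mean` helper and the sample data are illustrative, not from the lecture):

```python
def chunked_mean(chunks):
    """Compute the mean of a dataset one chunk at a time,
    so the full data never has to be held in memory at once."""
    total, count = 0.0, 0
    for chunk in chunks:
        total += sum(chunk)
        count += len(chunk)
    return total / count

# The same numbers, seen as a stream of small chunks:
stream = ([1.0, 2.0], [3.0, 4.0], [5.0, 6.0])
result = chunked_mean(stream)  # identical to the mean over all values
```

Frameworks such as Hadoop and Spark generalize exactly this pattern: partial aggregates are computed per partition, possibly on different machines, and then combined.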
Specific intrusion detection systems (IDSs) are needed to secure modern supervisory control and data acquisition (SCADA) systems due to their architecture, stringent real-time requirements, network traffic features, and specific application-layer protocols. This lecture aims to assess the state of the art, identify open issues, and provide insight into future study areas. To achieve these objectives, we start from the factors that shape the design of dedicated intrusion detection systems for SCADA networks and focus on network-based IDS solutions.
The increasing availability of scholarly metadata in the form of Knowledge Graphs (KGs) offers opportunities for studying the structure of scholarly communication and the evolution of science. Such KGs form the foundation for knowledge-driven tasks, e.g., link discovery, link prediction, and entity classification, which enable recommendation services. Knowledge graph embedding (KGE) models have been investigated for such knowledge-driven tasks in different application domains.
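To give a concrete flavor of KGE scoring, the sketch below uses the well-known TransE idea, where a triple (h, r, t) is plausible when the vector h + r lands close to t. The embeddings here are hand-picked toy values (in practice they are learned by gradient descent), and the entity names are hypothetical:

```python
def transe_score(h, r, t):
    """TransE plausibility score: negative L2 distance ||h + r - t||.
    Scores closer to zero indicate a more plausible triple."""
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Toy 3-dimensional embeddings (assumed for illustration only)
emb = {
    "Turing":    [0.1, 0.2, 0.0],
    "Cambridge": [0.1, 0.2, 1.0],
    "Paris":     [0.9, 0.5, 0.3],
    "studiedAt": [0.0, 0.0, 1.0],
}

plausible   = transe_score(emb["Turing"], emb["studiedAt"], emb["Cambridge"])
implausible = transe_score(emb["Turing"], emb["studiedAt"], emb["Paris"])
```

Link prediction then amounts to ranking candidate tails t by this score for a given head and relation; other KGE models (DistMult, ComplEx, RotatE) replace the translation with different interaction functions.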
Over the past years, there has been a resurgence of Datalog-based systems in the database community as well as in industry. In this context, it has been recognized that to handle the complex knowledge-based scenarios encountered today, such as reasoning over large knowledge graphs, Datalog has to be extended with features such as existential quantification. Yet, Datalog-based reasoning in the presence of existential quantification is in general undecidable.
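Plain Datalog reasoning is decidable because naive bottom-up evaluation always reaches a fixpoint over a finite set of facts; existential rules lose this guarantee, since the chase procedure can keep inventing fresh values. The following is a minimal sketch of naive forward chaining for the textbook transitive-closure program (the helper names and the tuple encoding of facts are my own, not from the lecture):

```python
def forward_chain(facts, rules):
    """Naive Datalog evaluation: apply every rule to the known facts
    until no new fact is derived (a fixpoint is reached)."""
    known = set(facts)
    while True:
        new = {f for rule in rules for f in rule(known)} - known
        if not new:
            return known
        known |= new

# The classic Datalog program for transitive closure:
#   path(X, Y) :- edge(X, Y).
#   path(X, Z) :- path(X, Y), edge(Y, Z).
def base(known):
    return {("path", x, y) for (p, x, y) in known if p == "edge"}

def step(known):
    return {("path", x, z)
            for (p1, x, y) in known if p1 == "path"
            for (p2, y2, z) in known if p2 == "edge" and y2 == y}

facts = {("edge", "a", "b"), ("edge", "b", "c")}
derived = forward_chain(facts, [base, step])
```

With an existential rule such as `person(X) -> exists Y. parent(X, Y), person(Y)`, each application would introduce a fresh individual, and this loop could run forever; decidable fragments (e.g., guarded or weakly acyclic rule sets) restrict rules precisely to tame this behavior.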
Mikhail Galkin (Fraunhofer IAIS) Lecture
The rapid development of digital technologies, IoT products and connectivity platforms, social networking applications, and video, audio, and geolocation services has created opportunities for collecting and accumulating large amounts of data. While in the past corporations dealt with static, centrally stored data collected from various sources, with the rise of the web and cloud services, cloud computing is rapidly overtaking traditional in-house systems as a reliable, scalable, and cost-effective IT solution.
Although each European government, with its public administration services, can be treated as a big data ecosystem, interconnecting, integrating, and processing that data at the EU level remains a real challenge. Discussion of the public benefit of integrating and opening such data can be found in our previous work, where we examined the use of the Linked Data approach in European e-Government systems.