Mario is a senior principal software engineer at Red Hat working as Drools project lead. He has extensive experience as a Java developer, having been involved in (and often leading) many enterprise-level projects in several industries, ranging from media companies to the financial sector. His interests also include functional programming and Domain Specific Languages. By leveraging these two passions he created the open source library lambdaj, with the purpose of providing an internal Java DSL for manipulating collections and allowing a bit of functional programming in Java. He is also a Java Champion, the JUG Milano coordinator, a frequent speaker, and the co-author of "Modern Java in Action", published by Manning.
What AI can do nowadays is simply mind-blowing. I must admit that I cannot stop being surprised, sometimes literally jumping from my seat and thinking: "I didn't imagine that AI could ALSO do this!". What is a bit misleading here is that what we tend to identify with Artificial Intelligence is actually Machine Learning, which is only a subset of all the AI technologies available: ML is a fraction of the whole AI story, while Symbolic Artificial Intelligence enables experts to encode their knowledge of a specific domain through a set of human-readable and transparent rules.
In fact, there are many situations where being surprised is the last thing you want. You don't want to jump from your seat when your bank refuses your mortgage without any human-understandable reason, but only because the AI said no. And the bank itself may want to grant mortgages only to applicants who are considered viable under its strict and well-defined business rules.
Given these premises, why not mix two very different and complementary AI branches, Machine Learning and Symbolic Reasoning? During this talk we will demonstrate with practical examples why this can be a winning architectural choice in many common situations, and how Quarkus, through its langchain4j and drools extensions, makes it straightforward to develop applications integrating those technologies.
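To make the pattern concrete, here is a minimal, purely illustrative sketch of the idea (not the talk's actual demo): a langchain4j AI service extracts structured data from free text, and Drools rules then make the transparent, auditable decision. Class names such as LoanApplication and ApplicationExtractor are hypothetical, the rules are assumed to live in a DRL file on the classpath, and in a Quarkus application the langchain4j and drools extensions would wire these components via CDI instead of the plain APIs used here.

```java
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieSession;

public class MortgageTriage {

    // Hypothetical fact class: populated by the LLM, then evaluated by the rules.
    public static class LoanApplication {
        public String applicant;
        public int yearlyIncome;
        public int requestedAmount;
        public boolean approved;
    }

    // Machine Learning side: a langchain4j AI service turning free text into a POJO.
    interface ApplicationExtractor {
        @UserMessage("Extract applicant name, yearly income and requested amount from: {{it}}")
        LoanApplication extract(String email);
    }

    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();
        ApplicationExtractor extractor = AiServices.create(ApplicationExtractor.class, model);

        LoanApplication application =
                extractor.extract("Hi, I'm Alice, I earn 60000 a year and I need a 200000 mortgage.");

        // Symbolic Reasoning side: human-readable business rules (in a DRL on the
        // classpath) decide whether the application is approved.
        KieSession session = KieServices.Factory.get().getKieClasspathContainer().newKieSession();
        session.insert(application);
        session.fireAllRules();
        session.dispose();

        System.out.println("Approved: " + application.approved);
    }
}
```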
How many times have you implemented a clever performance improvement, and maybe put it in production, because it seemed the right thing™ to do, without even measuring the actual consequences of your change? And even if you are measuring, are you using the right tools and interpreting the results correctly? During this deep-dive session we will use examples taken from real-world situations to demonstrate how to develop meaningful benchmarks, how to avoid the most common, but often subtle, pitfalls, and how to correctly interpret the results and take action to improve them. In particular we will illustrate how to use JMH for these purposes, explaining why it is the only reliable tool for benchmarking Java applications, and showing what can go horribly wrong if you decide to measure the actual performance of a Java program without it. At the end of this session you will be able to create your own JMH-based benchmarks and, more importantly, to use their results effectively in order to improve the overall performance of your software.
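As a flavour of what a JMH benchmark looks like, here is a minimal sketch (illustrative only, not material from the talk). It measures the average time to sum an array and consumes the result through a Blackhole, which guards against dead-code elimination, one of the subtle pitfalls that can silently invalidate a hand-rolled measurement.

```java
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.infra.Blackhole;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5)
@Measurement(iterations = 5)
@Fork(1)
@State(Scope.Benchmark)
public class SumBenchmark {

    private int[] data;

    @Setup
    public void setup() {
        // Benchmark state is initialized here, outside the measured code path.
        data = new int[10_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
    }

    @Benchmark
    public void sum(Blackhole bh) {
        long total = 0;
        for (int value : data) {
            total += value;
        }
        // Consuming the result prevents the JIT from optimizing the loop away.
        bh.consume(total);
    }
}
```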