Relational databases are great. Great at what they are built for – consistently managing complex data. This requires splitting an object, e.g. a “Patient” or a “Customer”, into atomic entities and storing the different parts in different tables. Once distributed across many tables, however, an object is hard to analyze “as a whole” – and that is where today’s databases and analytical technologies fall short.
Xplain organizes data in a different, “object-centric” way and provides access to objects as a whole. You can define operations on those objects and iterate over them. To really reap the benefits, you can use the Object-Map-Reduce interface to execute an operation massively in parallel on millions of stored object instances.
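To make the contrast concrete, here is a minimal sketch – with purely hypothetical table and field names, not Xplain’s actual data model – of how a “Patient” split across relational tables can be re-assembled into one object-centric view:

```python
# Hypothetical sketch: the same "Patient" seen two ways.
# In a relational schema, the object is split across tables:
patients_table = [{"patient_id": 1, "name": "A. Smith"}]
diagnoses_table = [
    {"patient_id": 1, "code": "E11", "date": "2020-03-01"},
    {"patient_id": 1, "code": "I10", "date": "2021-06-15"},
]
prescriptions_table = [
    {"patient_id": 1, "drug": "metformin", "date": "2020-03-05"},
]

def assemble_patient(pid):
    """Re-join the fragments into one nested, 'object-centric' unit."""
    return {
        **next(p for p in patients_table if p["patient_id"] == pid),
        "diagnoses": [d for d in diagnoses_table if d["patient_id"] == pid],
        "prescriptions": [r for r in prescriptions_table
                          if r["patient_id"] == pid],
    }

patient = assemble_patient(1)
print(patient["name"], len(patient["diagnoses"]))  # the whole object, one unit
```

Once the object exists as a single unit like this, operations can be defined directly on it instead of on joins across tables.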
Algorithms previously painful to implement are now easy to apply. Novel algorithms – previously simply unimaginable – are becoming feasible.
Object Analytics will propel the field of Data Analytics and Artificial Intelligence into novel orbits.
Not just predict – but shape the future: The question behind predictive modelling is: “What is the probability of a future event?” Often, however, the more important question is: “Which actions will change the probability of future events?” If this applies in your case, then causal inference models are what you need.
Missing information leads to flawed conclusions about “causal” effects – and this also holds if you can analyze only parts of a complex object at a time. Xplain’s holistic Object Analytics therefore opens up novel opportunities to uncover potential “cause and effect” relationships – still no proof without an experiment – but we can filter out myriads of meaningless correlations and help you quickly get to the core.
Knowing causal dependencies means being able to influence a system – a major step towards intelligent systems in real-world environments.
The “Object Explorer” is Xplain’s web-based frontend to view and analyze objects statistically.
This frontend allows you to analyze objects “as a whole” across all available data streams – instead of the keyhole views which result from classical DWH approaches and the replication of data into “Star schemas” or “OLAP cubes”. No need for experts to coerce data into those constrained analytical schemas, and with that you bring analytics from the ivory tower into your daily business: Interactively follow your train of thought from questions to follow-up questions and – supported by predictive models – discover potential “cause and effect” relationships.
Hook this engine on top of a data source (e.g. a relational database) – and an “object-centric” view of your data is quickly built. Different interfaces then allow you to work analytically with data represented as objects:
Our REST-based interface enables you to code in whatever language you prefer: R, R-Shiny, Java, PHP, C#, …! Use our generic Web API to connect your application to an Xplain Data backend and query your data.
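As a rough illustration – the endpoint path and payload fields below are illustrative assumptions, not the actual API contract – a query against an Xplain Data backend via a generic Web API could be assembled like this:

```python
import json

# Hypothetical sketch of a REST query payload. The field names
# ("object", "aggregate", "selection") and the endpoint URL are
# assumptions for illustration only.
query = {
    "object": "Patient",
    "aggregate": "count",
    "selection": {"diagnosis": "E11"},  # e.g. all patients with this code
}
payload = json.dumps(query)
print(payload)

# Sending it would look roughly like this (requires a running backend):
#   import urllib.request
#   req = urllib.request.Request(
#       "http://localhost:8080/query",
#       data=payload.encode(),
#       headers={"Content-Type": "application/json"},
#       method="POST",
#   )
#   with urllib.request.urlopen(req) as resp:
#       result = json.loads(resp.read())
```

Because the interface is plain HTTP with JSON, the same request can be issued from R, Java, PHP, C# or any other language with an HTTP client.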
Define an operation on an object and execute it massively in parallel on millions of stored object instances. With this interface, you can inject algorithms deep into the core engine of the database. The algorithms come to the data instead of the data to the algorithms: no moving tons of data, no expensive transformation of data into constrained formats until it can be sent to and processed by an algorithm.
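The map-reduce pattern behind this can be sketched as follows – a toy, sequential stand-in, with hypothetical data, for what the engine would execute in parallel across millions of stored objects:

```python
from functools import reduce

# Hypothetical Object-Map-Reduce sketch: a "map" function runs once per
# object (here, per patient) and a "reduce" function combines the partial
# results. The real engine would parallelize this across the stored
# object instances; here we run it sequentially over a toy list.
patients = [
    {"id": 1, "diagnoses": ["E11", "I10"]},
    {"id": 2, "diagnoses": ["I10"]},
    {"id": 3, "diagnoses": []},
]

def map_fn(patient):
    # Emit 1 if the patient carries diagnosis I10, else 0.
    return 1 if "I10" in patient["diagnoses"] else 0

def reduce_fn(a, b):
    # Sum the per-object contributions.
    return a + b

result = reduce(reduce_fn, map(map_fn, patients), 0)
print(result)  # → 2 (two of the three patients carry I10)
```

The key property is that `map_fn` sees one complete object at a time, so the operation runs where the object lives – no joins, no export of raw rows to an external analytics tool.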
The buzz about Artificial Intelligence is ubiquitous, but there is no talk about causality …
… how can a system act intelligently to achieve a goal without a notion of cause and effect?
We help early adopter customers to understand their data in terms of potential causal relationships. We introduce smart concepts of causality into the world of Machine Learning.
Statistical and machine learning algorithms need to cast data into constrained analytical schemas, typically a flat table …
… while real world data is much more complex than that.
We imagine algorithms that will process any complex “object” as it is, and live in the real world instead of an artificially prepared analytics environment.
… soon – with no need for experts to pre-structure data – intelligent algorithms will be digging autonomously through complex and constantly changing data environments. They will detect likely “cause and effect” relationships and – based on that knowledge – take the best actions or assist experts in achieving the desired outcome.
“Each new idea passes through three stages. First, people will ridicule it. Second, it is violently opposed. Finally, it will be considered self-evident.” - Schopenhauer (1788 - 1860)
Xplain Data is a 100% privately-owned and self-funded start-up company. In 2015, we set off as a small team to develop some groundbreaking innovations in the context of Big Data and Artificial Intelligence. From that work, the “Object Analytics” paradigm emerged – a novel concept for analytically working with entire objects – and, based on it, our unique approach to “Causal Inference”.
Innovation requires entrepreneurship – and an entrepreneurial cooperation model with early adopter customers. We are looking for visionary customers and partners who want to bring leading-edge intelligence into their portfolio. We offer novel ways of cooperation such as our co-innovation model, which – instead of pay-for-service – shares risk and reward.
If you would like a fresh view on analytics and some new ideas on how to combine the strengths of established companies with those of a small, agile start-up, please feel free to contact us.
has a PhD in Theoretical Physics and Neuroinformatics and more than 20 years' experience in developing analytics technologies at major companies like Siemens, Accenture and SAP. Before founding Xplain Data he worked as Chief Architect at SAP with a focus on Big Data Analytics. Earlier in his career, he co-founded a startup where he was responsible for the entire product lifecycle of analytics innovations. Michael has gathered broad and unique knowledge spanning database and BI technologies, statistics, mathematics and machine learning. From numerous projects he knows how to apply these technologies in a business context.
holds a diploma in computer science and got a PhD for a thesis on structure searching in protein databases. He worked as a substitute professor of theoretical computer science at the Chair for Efficient Algorithms. For several years, Hanjo taught at TU Munich on fundamentals of algorithms and data structures. He also gave master level lectures on computational biology and advanced network and graph algorithms. Hanjo is an expert in algorithms and data structures. In particular, he is a specialist for bioinformatics and graph/network-related problems.
holds a PhD in applied computer science for developing algorithms to deal with structural changes in Data Warehouses. He has worked for more than 25 years in the Life Science and Health Care industry, on projects for different hospitals, insurance companies and pharmaceutical companies. Christian has strong hands-on experience with different Business Intelligence tools, database management systems and many programming languages and frameworks. During the last decade he has focused on the development of different web applications.