We are living through the beginning of the Data Revolution.

A period when the number of data sources and the volume of data are exploding. A time when it is essential to make certain that your business's data is analytics-ready at the start of the data pipeline, rather than at random points where business users may need it.


Every day we encounter businesses with disconnected data: fractured collections from multiple sources, and new, unused data that could make a significant difference to business performance. Unfortunately, most businesses are taking a traditional or neo-traditional approach, advised by people and organisations who lack the vision to realise that there are better and more innovative ways to solve the data issues they face. They are comfortable falling back on what they understand, rather than looking forward and challenging the status quo.

By failing to address these data challenges, these businesses are making critical decisions based on incomplete or inaccurate data. Armed with traditional business intelligence tools and big data advice, they are finding a product, organisational and process gap that they are struggling to fill: how to make sure that the right people get the right data at the right time.


The challenges begin with data silos and multiple data sources, where competitive advantage is often gained; extend through connectivity and the creation of repeatable processes; and continue into data delivery, where data scientists seem to spend 80-90% of their time preparing data (a role that sounds more like data janitor than data scientist). Analytics, governance and security round out the challenges, and meeting them requires a strong methodology.


Imagine a different approach to these data challenges, one which provides a sound methodology supported by a range of capabilities that fill the gaps in existing methods. It starts with the ability to automatically discover all sources, and to join these disparate data sources together while leaving the data in the underlying store, recording the knowledge of each source as usable metadata. Imagine that this delivered automated link and relationship discovery, enabling the business to extend its knowledge of the data sources and removing the need for an ETL process. Going further, automated schema discovery and the creation of data lineage and pedigree give the business accurate insight into its data, and allow new data sources to be introduced with limited effort.
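To make the metadata-first idea concrete, here is a minimal sketch, using hypothetical names rather than any real fraXses API, of a catalogue that records each source's schema and proposes joins between sources, while the data itself stays in place:

```python
# Minimal sketch of a metadata catalogue: the data stays in its source
# system; only schema and relationship knowledge is stored centrally.
# All class and field names here are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class SourceMetadata:
    name: str                                   # e.g. "crm_postgres"
    tables: dict = field(default_factory=dict)  # table -> list of columns

@dataclass
class Relationship:
    left: str    # "source.table.column"
    right: str

class Catalogue:
    def __init__(self):
        self.sources = {}
        self.relationships = []

    def register(self, source: SourceMetadata):
        self.sources[source.name] = source

    def discover_links(self):
        # Naive link discovery: columns with the same name in
        # different sources are proposed as join candidates.
        seen = {}
        for src in self.sources.values():
            for table, cols in src.tables.items():
                for col in cols:
                    ref = f"{src.name}.{table}.{col}"
                    if col in seen and not seen[col].startswith(src.name + "."):
                        self.relationships.append(Relationship(seen[col], ref))
                    else:
                        seen.setdefault(col, ref)
        return self.relationships

catalogue = Catalogue()
catalogue.register(SourceMetadata("crm", {"customers": ["customer_id", "name"]}))
catalogue.register(SourceMetadata("billing", {"invoices": ["invoice_id", "customer_id"]}))
links = catalogue.discover_links()
print(links[0].left, "<->", links[0].right)
# crm.customers.customer_id <-> billing.invoices.customer_id
```

A real platform would use profiling and statistics rather than simple name matching, but the principle is the same: the knowledge of how sources relate lives in metadata, not in hand-written ETL code.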

The methodology comes back into play when data managers, scientists and others start to build Data Objects. Data Objects are rule sets, configured using the previously stored metadata, that form building blocks which can be combined to provide efficient and effective access to the data. Data Objects remove the need for programming, coding or development by automatically generating the optimised code required to execute a request for data. This has been seen to reduce development effort by up to 90%.
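As an illustration of the Data Object idea, here is a hypothetical sketch (not fraXses code) of a declarative rule set that generates the query it needs on demand, instead of that query being hand-written for each request:

```python
# Hypothetical sketch: a Data Object holds declarative rules and
# generates the query on demand, so no hand-written SQL or ETL
# code is needed for each new request for data.
class DataObject:
    def __init__(self, name, source_table, columns, filters=None, join=None):
        self.name = name
        self.source_table = source_table
        self.columns = columns
        self.filters = filters or {}
        self.join = join  # (other_table, left_col, right_col)

    def to_sql(self):
        sql = f"SELECT {', '.join(self.columns)} FROM {self.source_table}"
        if self.join:
            other, left, right = self.join
            sql += f" JOIN {other} ON {self.source_table}.{left} = {other}.{right}"
        if self.filters:
            clauses = [f"{col} = '{val}'" for col, val in self.filters.items()]
            sql += " WHERE " + " AND ".join(clauses)
        return sql

active_customers = DataObject(
    name="active_customers",
    source_table="customers",
    columns=["customers.name", "invoices.total"],
    filters={"customers.status": "active"},
    join=("invoices", "customer_id", "customer_id"),
)
print(active_customers.to_sql())
```

Because the rules are data rather than code, two Data Objects can be combined, audited or regenerated for a different target engine without a developer rewriting anything.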

Visualising the data, or providing access to it for other processes and systems, is achieved by connecting to the Data Objects via the JDBC, ODBC or Web API layer, enabling the business to plug in its existing tools and systems and access the data far more effectively than was previously possible.
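The point of the standard layer is that consumers need nothing new: any tool that can issue SQL can read a Data Object. In this hedged sketch, SQLite stands in for the JDBC/ODBC layer and a view stands in for an exposed Data Object; the client side is an ordinary SELECT:

```python
# Hedged sketch: SQLite stands in for the standard SQL layer here.
# In practice an existing BI tool would connect over JDBC or ODBC
# and issue the same ordinary SELECT against the exposed Data Object.
import sqlite3

conn = sqlite3.connect(":memory:")
# The "Data Object" appears to the client as just another relation.
conn.execute("CREATE VIEW active_customers AS SELECT 'Alice' AS name, 120.0 AS total")
rows = conn.execute("SELECT name, total FROM active_customers").fetchall()
print(rows)  # [('Alice', 120.0)]
```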

The solution goes further in helping to manage access to the data by providing an Intelligent Data Lake, which can be physical or virtual; this is particularly useful when data sits in production systems or databases that do not perform effectively. The metadata methodology, combined with Data Objects and an ‘everything is real-time’ philosophy, gives the business an up-to-the-second view of critical data.

The whole process is supported by a governance and security policy that reflects the business's approach to data, ensuring that only the right people have access to the right data at the right time.
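Governance of this kind can live in the same metadata layer as everything else. A minimal sketch, with entirely hypothetical policy names, of a per-request check on who may see a given Data Object:

```python
# Hypothetical sketch: governance rules stored alongside the metadata
# decide, per request, whether a user's roles permit access to a
# given Data Object. Names and roles are illustrative only.
POLICIES = {
    "active_customers": {"roles": {"analyst", "data_steward"}},
    "payroll": {"roles": {"finance"}},
}

def can_access(user_roles, data_object):
    allowed = POLICIES.get(data_object, {}).get("roles", set())
    return bool(set(user_roles) & allowed)

print(can_access({"analyst"}, "active_customers"))  # True
print(can_access({"analyst"}, "payroll"))           # False
```

Because every request flows through the Data Object layer, a check like this is enforced in one place rather than re-implemented in each consuming tool.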


Real-time analysis and access to all available data, with a reduction of up to 90% in traditional effort, allow businesses to capitalise on all of their data, regardless of location or silo. Huge savings in time and effort roll up into immediate financial impact, making an end-to-end data management solution an essential part of the analytics process. Often said to be “too good to be true”, or to resemble “the holy grail of data management”, the fraXses platform delivers these capabilities out of the box. We have seen the positive impact the metadata methodology has made on global businesses, how the power of Data Objects has delivered results in hours rather than months, and how businesses have saved millions by adopting a better approach.

For a more detailed insight, please feel free to get in touch and challenge us to demonstrate how this can all be possible.