Intelligence in applications and platforms to drive rapid, objective decisions

We are all data natives. But that’s looking at us as consumers of products and services.

Tools that process data where it resides, without moving it around unnecessarily, have also been described as data native. Co-locating compute with data is a desirable architectural principle, especially at large data volumes. However, that capability is only one of many that define what it means to be data-native.

An application or process is data-native when it has been built from the ground up with data as the primary design principle. As with other design principles, it is necessary to consciously consider how data can add value in every aspect of the application or the process. In many cases, data must first be collected before it can add value, so instrumentation becomes a design consideration as well. Finally, data's ability to add value comes down to how the application or the process will use the insights, that is, what actions it will take.
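To make instrumentation concrete, here is a minimal sketch of treating data capture as a first-class design concern: every decision a function makes is recorded as an event, so the application generates analyzable data as a by-product of running. All names here (`instrumented`, `recommend_plan`, the event fields) are hypothetical illustrations, not a prescribed API.

```python
import json
import time
from functools import wraps

def instrumented(event_name):
    """Decorator that records inputs, outputs and latency of a call,
    so the application emits analyzable data as a by-product."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            event = {
                "event": event_name,
                "args": repr(args),
                "result": repr(result),
                "latency_ms": round((time.time() - start) * 1000, 2),
                "ts": time.time(),
            }
            print(json.dumps(event))  # in practice: publish to an event bus
            return result
        return wrapper
    return decorator

@instrumented("plan_recommended")
def recommend_plan(customer_usage_gb: float) -> str:
    # Placeholder business logic; the emitted event stream is the point.
    return "premium" if customer_usage_gb > 50 else "basic"

recommend_plan(72.5)
```

In a real system the event would flow into a log pipeline rather than stdout; what matters is that the data feed is designed in from the start, not bolted on later.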

Why Data-Native?

For far too long, the promise of big data has been hyped, lost and recovered again (most recently, by riding the AI wave) but hasn't delivered the expected success. Much of the angst stems from not putting to practical use the vast troves of data being collected. A common reason traces back to business applications and processes not consuming the insights derived from data. That, in turn, occurs due to a host of factors, including dependence on legacy environments that are technically complex to change, adherence to institutional processes that resist change, and a lack of actionable insights (i.e., prescriptive outputs).

Data-native, as a design principle, addresses these challenges holistically in a way that naturally creates a virtuous cycle of continuous value creation. This cycle is one of constant data generation, collection, refinement, integration, insight extraction and action-taking. It is indeed hard for legacy applications to become data-native without radical changes to code, so it becomes necessary to rethink these systems afresh. Once engineered to be data-native, however, an application becomes intelligent enough to adapt: observing, processing and reacting, or even proactively intervening on its own. Processes become self-correcting or self-optimizing without change requests, business requirements documents and delay-inducing cross-team collaboration.
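As a sketch of what that cycle looks like in code, consider a hypothetical self-optimizing process that tunes a retry timeout from the latencies it observes. The scenario and numbers are illustrative assumptions; the shape of the loop (observe, extract an insight, act, repeat) is the point.

```python
import random
import statistics

def observe() -> list[float]:
    """Collect fresh operational data (here: simulated request latencies, in ms)."""
    return [random.gauss(200, 40) for _ in range(100)]

def extract_insight(latencies: list[float]) -> float:
    """Refine raw observations into an actionable insight:
    a timeout covering ~95% of requests (mean + 2 standard deviations)."""
    return statistics.mean(latencies) + 2 * statistics.stdev(latencies)

def act(current_timeout: float, suggested: float) -> float:
    """Apply the insight conservatively, moving part-way toward the target
    so the process adapts without oscillating."""
    return 0.8 * current_timeout + 0.2 * suggested

timeout_ms = 500.0
for cycle in range(5):  # each iteration is one turn of the virtuous cycle
    timeout_ms = act(timeout_ms, extract_insight(observe()))
    print(f"cycle {cycle}: timeout adjusted to {timeout_ms:.1f} ms")
```

No change request or requirements document is involved: the process corrects itself from the data it generates.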

For the business, data-native applications and processes can respond quickly to changing behavior patterns and business climate, allowing the business to remain competitive and grow. Creating new business models at blitzkrieg speed and developing new solutions and services become far more achievable.

What is Data-Native?

An application or process is data-native if it is engineered to provide solutions using data as the primary design principle. These applications and processes can take advantage of the data they generate, often combined with other data from their context, to inform and evolve their own behavior intelligently. A successful data-native implementation draws on the wisdom accumulated over the years in the fields of Data Science and Software Engineering, coupled with advancements in AI, merged in the context of enterprise architectures. The paradigm fosters minimal manual intervention in decision-making and allows the business to focus on its core competency.

Informally, data-native is best explained by drawing on the analogy of cloud-native.

Borrowing from various sources, cloud-native is a paradigm for building applications that are "resilient, manageable, and observable", resulting in benefits of speed, scale, and lower cost and risk for the business. None of the modern-day tech-driven companies would subscribe to the view that simply moving workloads to a cloud service provider is being cloud-native.

Likewise, none of the modern-day data-driven companies would subscribe to the view that simply doing data science is the same as being data-native. By building applications that are aware of the context in which they operate, and that use that intelligence to improve their own functioning as well as to guide the machines, applications and humans around them, the organization gains the ability to do the right thing at the right time. Vague though this may sound when described at this level, the implications of engineering systems to adhere to this design paradigm are profound. In many ways, the promise of AI, even AGI, is related to this potential for reimagining applications, but I believe "AI" or "AGI" sounds even more vague!

A related concept to data-native is Software 2.0. Though this field is still emerging, the ideas being discussed identify neural networks themselves as the authors of code, with programming "done by accumulating, massaging and cleaning datasets". The subtext is that networks are much better than humans at generalizing patterns from large datasets and can be trained to achieve certain goals. The data-native paradigm, in contrast, is broader: it stems from the observation that all applications and processes operate in a business context, and that context informs both the actions to take and the way the data is to be processed. For some tasks, the context is specific and narrow enough for Software 2.0 methods to be well suited. For many others, business rules and hand-engineered context in the form of features and interactions are necessary for the systems to learn and adapt, especially across boundaries.
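A small sketch of that contrast, under assumed data and thresholds: a learned model (the Software 2.0 component) drives the default decision, while a hand-engineered business rule encodes context that the model cannot learn from this data. The features, training set and rule are all hypothetical illustrations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Learned component (Software 2.0 style): a model fit on historical data.
# Tiny synthetic dataset purely for illustration: [orders, churn_score].
X_train = np.array([[5, 0.1], [40, 0.9], [12, 0.3], [55, 0.8]])
y_train = np.array([0, 1, 0, 1])  # 1 = offer a retention discount
model = LogisticRegression().fit(X_train, y_train)

def decide_discount(orders: int, churn_score: float, account_overdue: bool) -> bool:
    """Business rules wrap the model: context the network cannot learn
    from this data (e.g., account status or compliance) overrides it."""
    if account_overdue:  # hand-engineered rule: never discount overdue accounts
        return False
    proba = model.predict_proba([[orders, churn_score]])[0, 1]
    return proba > 0.5   # learned component drives the default decision

print(decide_discount(orders=30, churn_score=0.85, account_overdue=False))
print(decide_discount(orders=30, churn_score=0.85, account_overdue=True))
```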

In the next post, we will look at the characteristics of the data-native design principle. In the meantime, check out our pioneering methodology for engineering data-native applications and processes, RoboticDataScience (RDS).

About the author:

Rangarajan Vasudevan, CEO of TheDataTeam, is an applied data science professional with extensive consulting experience on massive scale data across industries.

