Microservice Data Pipeline product

GRAVITY

Based on DataMesh architecture
One-stop data product self-service platform


Data silos and data fragmentation are the biggest obstacles on a company's digital transformation journey.

As cloud technology has spread, more and more companies, willingly or not, have embarked on digital transformation, and in this wave of change microservices have become its hallmark: digital transformation is largely a process of moving to the cloud and to microservices. Yet the deeper microservice adoption goes, the more prominent the problem of decoupled, fragmented data management becomes. At the same time, long-standing corporate culture and legacy technical architectures have produced many data islands, seriously hindering the progress of transformation.

Poor business experience

Scattered, fragmented data makes business processes extremely hard to improve, resulting in a poor customer experience and difficult business expansion.

Hindered business decisions

Declining data credibility and quality obscure the value the data could deliver and make accurate decisions hard to reach.

Duplicated cost investment

Existing data is hard to reuse, leading to repeated IT investment and growing application-infrastructure spending.

Slowed business processes

Repeated, inefficient ETL operations consume substantial manpower and man-hours, so investment in building the business yields little.

Instant CDC synchronization

Captures changes in the source database and applies them to the destination database within milliseconds.
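Gravity's CDC internals are not documented here, so the following is only an illustrative sketch: change events are modeled as plain dicts with assumed "insert"/"update"/"delete" operation names, and the destination table as a dict keyed by primary key.

```python
# Illustrative sketch only: the event shape ("op", "key", "row") is an assumption,
# not Gravity's actual wire format.

def apply_change(destination, event):
    """Apply one change-data-capture event to a destination table (dict keyed by primary key)."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        destination[key] = event["row"]   # upsert the captured row
    elif op == "delete":
        destination.pop(key, None)        # remove the row if present
    return destination

# Replaying a short change stream keeps the destination in sync with the source.
dest = {}
stream = [
    {"op": "insert", "key": 1, "row": {"id": 1, "name": "Ann"}},
    {"op": "update", "key": 1, "row": {"id": 1, "name": "Anne"}},
    {"op": "insert", "key": 2, "row": {"id": 2, "name": "Bob"}},
    {"op": "delete", "key": 2},
]
for event in stream:
    apply_change(dest, event)
```

Replaying events in order is what makes the destination converge on the source state, which is the essence of CDC-based synchronization.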

Multi-type data source and destination database synchronization

Supported systems include Oracle, MySQL, SQL Server, PostgreSQL, Sybase, DB2, MongoDB, Excel, CSV, XML, and more.

Relational database to JSON

Automatically maps the relational model to JSON during replication.
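The mapping described above can be sketched roughly as follows; the table names, columns, and one-to-many embedding strategy here are assumptions for illustration, not Gravity's actual mapping rules.

```python
import json

# Hypothetical sketch: a parent table plus a child table linked by foreign key
# are folded into nested JSON documents, as a document store would expect.

def rows_to_json(customers, orders):
    """Map flat relational rows into nested documents, one per customer."""
    docs = []
    for c in customers:
        docs.append({
            "id": c["id"],
            "name": c["name"],
            # Embed each customer's orders as a nested array.
            "orders": [{"order_id": o["id"], "total": o["total"]}
                       for o in orders if o["customer_id"] == c["id"]],
        })
    return docs

customers = [{"id": 1, "name": "Ann"}]
orders = [{"id": 10, "customer_id": 1, "total": 99.5}]
documents = rows_to_json(customers, orders)
doc_json = json.dumps(documents)
```

Embedding the child rows (rather than keeping separate collections with references) is a common choice when the consumer always reads the parent and children together.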

Optimize data quality

Standardizes data rules while monitoring and analyzing the quality of incoming data.
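One simple way to picture rule-based quality monitoring is as a set of predicates run against every row, with failure counts reported per rule. The rule names and fields below are invented for illustration; Gravity's actual rule engine is not described here.

```python
# Hypothetical sketch: data-quality rules as predicates, plus a pass/fail report.

RULES = {
    "id_present": lambda r: r.get("id") is not None,
    "amount_non_negative": lambda r: r.get("amount", 0) >= 0,
}

def check_quality(rows):
    """Run every rule against every row; report how many rows fail each rule."""
    report = {name: 0 for name in RULES}
    for row in rows:
        for name, rule in RULES.items():
            if not rule(row):
                report[name] += 1
    return report

rows = [{"id": 1, "amount": 5}, {"id": None, "amount": -2}]
report = check_quality(rows)
```

A report like this is what lets operators watch quality trends over time instead of discovering bad data downstream.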

API design and management

Provides load balancing, disaster recovery, and API usage and traffic management.
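As a minimal sketch of the load-balancing idea (not Gravity's implementation), a round-robin rotation spreads API requests across backend endpoints; the backend names here are invented.

```python
import itertools

# Hypothetical sketch: round-robin load balancing over backend API endpoints.

class RoundRobinBalancer:
    def __init__(self, backends):
        # itertools.cycle yields backends in order, forever.
        self._cycle = itertools.cycle(backends)

    def next_backend(self):
        """Return the next backend in rotation."""
        return next(self._cycle)

balancer = RoundRobinBalancer(["api-1", "api-2", "api-3"])
picks = [balancer.next_backend() for _ in range(4)]
```

Round-robin is the simplest policy; real gateways typically layer health checks and failover (the "disaster recovery" part) on top of the rotation.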

Multiple types of data

Supports structured, semi-structured, and unstructured data.

Digital transformation challenges

When enterprises move to a microservice architecture, they run into data scheduling problems that prevent microservices from truly taking hold.

  • Microservices (fine-grained applications) put additional access pressure on traditional core databases.
  • Data supply cannot keep up with business application development: heterogeneous data sources lack timely iteration, connecting sources to consumers is expensive, and deployment and integration costs are high and inflexible.
  • Containers only decouple applications; they do not achieve Database per Service.

Solutions

Solve the data supply problem of the microservice architecture, improve overall data supply efficiency, and enable flexible scheduling of the data platform.

  • Use Data Mesh to build a flexible data supply chain with distributed deployment and management, precisely filtering data and providing caching services to reduce access pressure on core databases.
  • Dynamically provide database APIs to simplify data-pipeline design and deployment, letting database administrators deliver data to application developers more agilely and solving long-standing data supply problems.
  • Update data sources in real time in an event-driven manner, and dynamically publish them to the relay database closest to each application.
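The event-driven relay idea above can be sketched as a publish/subscribe relationship between a source and regional relay stores; the class names and regions below are illustrative assumptions, not Gravity's API.

```python
# Hypothetical sketch: source changes are pushed to subscribing relay stores,
# each standing in for the relay database closest to an application.

class Relay:
    def __init__(self, region):
        self.region = region
        self.store = {}

    def on_event(self, key, value):
        self.store[key] = value   # keep the relay copy current

class Source:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, relay):
        self.subscribers.append(relay)

    def publish(self, key, value):
        """Push a change to every subscribed relay (event-driven, not polled)."""
        for relay in self.subscribers:
            relay.on_event(key, value)

source = Source()
eu, us = Relay("eu"), Relay("us")
source.subscribe(eu)
source.subscribe(us)
source.publish("price:42", 19.99)
```

Because changes are pushed as they happen rather than polled, each relay stays close to the source state while applications read only from their nearest copy.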

Gravity application scenarios


Legacy system data interface

  • Performs aggregation and relational operations in parallel, amplifying the data supply capacity of legacy systems with high throughput.
  • Data outcomes are prepared before the application runs and can be used immediately on demand.
  • Application read requests go directly to the logically processed data product.
  • Backed by the cache mechanism, multiple applications are supplied at the same time without impacting the source database.
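The caching behavior described in this list can be pictured as a read-through cache in front of the data product: many applications read the same key, but the source is queried only once. The names below are illustrative, not Gravity's API.

```python
# Hypothetical sketch: a read-through cache that shields the source database.

class CachedDataProduct:
    def __init__(self, fetch_from_source):
        self._fetch = fetch_from_source
        self._cache = {}
        self.source_reads = 0   # counts how often the real source was hit

    def get(self, key):
        """Serve from cache when possible; read the source only on a miss."""
        if key not in self._cache:
            self.source_reads += 1
            self._cache[key] = self._fetch(key)
        return self._cache[key]

product = CachedDataProduct(lambda key: f"row-for-{key}")
for _ in range(3):            # three applications requesting the same key
    value = product.get("order:7")
```

Three reads, one source hit: that gap is what protects the source database as the number of consuming applications grows.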

Batch processing efficiency

  • Data is delivered and processed evenly over time, avoiding the heavy performance impact of traditional bulk data retrieval.
  • The data source can be read once and published many times.
  • The cache mechanism supplies multiple applications at the same time at high throughput, without impacting the source database.
  • Dynamic APIs simplify data-pipeline design and deployment, letting database administrators deliver data to application developers more agilely.

Multi-source processing

  • Provides data according to the needs of application services.
  • Supports database and non-database sources, structured and unstructured data.
  • Aggregates heterogeneous data; data types can be transformed to meet consumer needs.
  • Supports multi-stage logic processing to replace traditional ETL/ELT operations.
  • Low latency: synchronizes data at near-real-time speed.
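Multi-stage logic processing, as described above, can be sketched as small composable stages applied in sequence instead of one monolithic ETL job; the stage names, fields, and tax rate below are invented for illustration.

```python
# Hypothetical sketch: a pipeline of small stages replacing a monolithic ETL job.

def filter_active(rows):
    """Stage 1: keep only active records."""
    return [r for r in rows if r["active"]]

def add_tax(rows, rate=0.05):
    """Stage 2: derive a total including an assumed 5% tax."""
    return [{**r, "total": round(r["amount"] * (1 + rate), 2)} for r in rows]

def run_pipeline(rows, stages):
    """Apply each stage to the output of the previous one."""
    for stage in stages:
        rows = stage(rows)
    return rows

rows = [{"active": True, "amount": 100.0}, {"active": False, "amount": 50.0}]
result = run_pipeline(rows, [filter_active, add_tax])
```

Keeping each stage independent is what makes the logic easy to reorder, reuse, or extend, which is the practical advantage over a single hand-written ETL script.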

Multi-cloud hybrid

  • Supports on-premises, cloud, and multi-cloud architectures concurrently.
  • Once a data source is published, the cache mechanism can supply multiple remote applications at the same time, saving considerable bandwidth.
  • Latency is greatly reduced; data can be synchronized to remote sites at near-real-time speed.
  • Filters specific information according to each region's laws and regulations.
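The regulation-aware filtering in the last bullet can be sketched as dropping disallowed fields before a row ships to a region's relay; the regions, field names, and rules below are purely illustrative assumptions.

```python
# Hypothetical sketch: strip fields a region's regulations disallow before sync.

BLOCKED_FIELDS = {
    "eu": {"national_id"},   # assumed example rule, not real legal guidance
    "us": set(),
}

def filter_for_region(row, region):
    """Return a copy of the row with region-blocked fields removed."""
    blocked = BLOCKED_FIELDS.get(region, set())
    return {k: v for k, v in row.items() if k not in blocked}

row = {"name": "Ann", "national_id": "A123", "city": "Berlin"}
eu_row = filter_for_region(row, "eu")
us_row = filter_for_region(row, "us")
```

Applying the filter at publish time means each regional relay only ever holds data it is allowed to hold, rather than filtering at query time.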
