Loome Connection
What is Apache Spark SQL Data Warehouse?
Apache Spark SQL is a module for Apache Spark that specialises in processing structured data, executing SQL queries against Spark's distributed datasets. It simplifies interaction with structured data by applying a level of abstraction that allows the data to be treated much like a database of relational tables.
Extract Data From Spark SQL
Loome makes it simple to connect to Spark SQL and extract data for downstream systems such as an Integration Hub, Reporting Data Store, Data Lake or Enterprise Data Warehouse. In-built features allow bulk selection of all source tables and files to be automatically synced on a regular schedule, minimising data load size by leveraging incremental load logic.
Load Data into Spark SQL
Loome allows you to quickly onboard your data to the cloud and load your entire data source into Spark SQL to support your Data Integration Hub, Reporting Data Store, Data Lake or Enterprise Data Warehouse.
Loome automatically generates target staging table structures, including automatic change detection to retain a full audit history of each version of a record received.
Natively Orchestrate Spark SQL Integration Tasks
Loome allows orchestration of data pipelines across data engineering, data science and high performance computing workloads with native integration of Spark SQL data pipeline tasks.
Loome provides a sophisticated workbench for configuration of job and task dependencies, scheduling, detailed logging, automated notifications and API access for dynamic task creation and execution.
Loome can execute tasks stored as scripts in a Git repository, entered via a web interface, or run as operations within a database. Loome includes support for native execution of SQL, Python, Spark, Hive, PowerShell/PowerShell Core and operating system commands.
Loome also simplifies control of deployment across multiple environments, including approval of changes between Development, Test and Production. It allows you to scale your advanced pipelines to take advantage of on-demand clusters without changing a single line of code.
Data Quality Hub in Spark SQL
Loome allows you to easily monitor Data Quality exceptions, reinforcing Spark SQL as your strategic Data Quality Hub.
Loome keeps an audit trail of resolved issues and proactively manages data quality with a fully automated data quality engine that generates audience-targeted alerts in real time.
Solution Scenarios
What are the Must-Have Attributes of a Modern Data Warehouse?
ETL vs ELT Pipelines in Modern Data Platforms
Why Data Lake Architecture is not a Silver Bullet for Analytics
Simplifying Financial Reporting
How Automated Alerts Can Help With Business Process Compliance
How Does a Customer Data Platform Provide Marketing Insight?