Data scientists spend over 80% of their time collecting, cleansing, and preparing data for machine learning. You can significantly simplify this with DataRobot Paxata. Using "clicks instead of code" reduces your data prep time from months to minutes and gets you to reliable predictions faster.
In this Ask the Expert event, you can chat with Krupa and ask her your questions about data prep, an interesting and important topic that she is here to help clarify.
Krupa Natarajan is a Product Management leader at DataRobot. Krupa has spent over a decade leading multiple Data Management products and has deep expertise in the space. She has a passion for product innovations that deliver customer value and a proven track record of driving vision to execution.
This Ask the Expert event is now closed.
Thank you Krupa for being a terrific event host!
Let us know your feedback on this event and suggestions for future events, and look for our next Ask the Expert event coming soon.
Hi @knat ,
Thank you for taking my question. My question is: what's the difference between data prep for business intelligence / data warehousing and data prep for machine learning / AI?
Thank you, Nicole
Hi Krupa - thanks for taking my question!
I'm very interested in the capabilities around data prep, as it's a critical step in the process for everyone. My question is, can I run a real-time prediction pipeline in Paxata?
Hi Nicole! A number of steps are common while there are some key differences.
Both BI and ML/AI use cases require that the user has access to data from a variety of data sources and the ability to work with a variety of data formats, join datasets together, cleanse and standardize the data (this step is very important to ensure prediction quality), and perform transformations, aggregations, and the like.
In addition, data prep for ML/AI can be split into two distinct life cycles: (a) Training Dataset preparation and (b) Inference/Prediction Dataset preparation.
For Training datasets, the Data Scientist/Business Analyst preparing the data should address several critical aspects based on the business value they are trying to achieve.
For Prediction-time data prep, you will need the data prep tool to operationalize, and potentially automate, as many of the data acquisition, merging, cleansing, and transformation steps as possible before the data is sent to deployed models to generate prediction scores. In many cases, additional data prep steps may be applied after scores are returned.
Thank you for your interest.
Paxata has a new 'Predict Tool' that allows DataRobot deployments to be invoked directly from Data Prep projects. The data acquisition + data prep steps + prediction scoring can all be operationalized using the Intelligent Automation capability that exists in Paxata and scheduled to run automatically or on-demand.
This is accessible through a REST API as well, enabling near-real-time predictions.
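For illustration only, invoking such a REST scoring endpoint from a script might look like the sketch below. The URL, token, and payload shape are hypothetical placeholders, not the actual Paxata/DataRobot API; consult the product documentation for the real endpoints.

```python
import json
import urllib.request

def build_score_request(endpoint, api_token, records):
    """Assemble an authenticated JSON POST request for a scoring endpoint.

    The endpoint URL, auth scheme, and payload shape are illustrative
    placeholders -- check your deployment's documentation for the real ones.
    """
    body = json.dumps({"rows": records}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_score_request(
    "https://paxata.example.com/api/score",   # placeholder URL
    "YOUR_API_TOKEN",
    [{"customer_id": 42, "region": "west"}],
)
# urllib.request.urlopen(req) would send it; response handling is omitted here.
```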
Great question! Paxata has been the leader in the Data Prep market (according to major analyst reports such as the Gartner Magic Quadrant), and now, with the merger of DataRobot and Paxata, DataRobot combines best-in-class Data Prep with a best-in-class Enterprise AI platform.
DataRobot Paxata is the only Data Prep offering that enables Data Scientists and Business Analysts to interact with their full scale of data without being limited to small samples. This is a key differentiator when it comes to enabling users to identify data quality issues and cleanse the data for ML exercises.
DataRobot Paxata also has unique intelligence capabilities, such as its patented Join detection: DataRobot Paxata automatically identifies how datasets join together for feature enrichment. Algorithmic Fuzzy Join is supported for scenarios where enrichment data coming from different systems and applications may be represented in different ways, making exact matches nearly impossible. In common scenarios such as this, DataRobot Paxata's fuzzy matching allows for feature enrichment regardless of the variation in the data.
Another important capability is DataRobot Paxata's algorithmic standardization: with a single click, Paxata will identify similar values (for example, misspellings in city names) in categorical variables and standardize them, leading to better training data and hence better prediction quality.
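As a rough illustration of what that kind of standardization does, here is a minimal standard-library sketch that folds infrequent, similar-looking spellings into the most common ones. It uses `difflib` as a stand-in for Paxata's actual algorithms, and the similarity cutoff is an arbitrary assumption:

```python
from collections import Counter
from difflib import get_close_matches

def standardize(values, cutoff=0.8):
    """Map infrequent, similar-looking spellings onto the most common ones."""
    canonical = []   # accepted spellings, most frequent first
    mapping = {}
    for value, _ in Counter(values).most_common():
        match = get_close_matches(value, canonical, n=1, cutoff=cutoff)
        if match:
            mapping[value] = match[0]   # fold variant into an existing spelling
        else:
            canonical.append(value)     # distinct enough: keep as its own value
            mapping[value] = value
    return [mapping[v] for v in values]

cities = ["Boston", "Boston", "Bostn", "New York", "Nw York"]
print(standardize(cities))
# → ['Boston', 'Boston', 'Boston', 'New York', 'New York']
```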
DataRobot Paxata is closely integrated with the DataRobot core. You can explore the AI Catalog from within the Data Prep experience, invoke deployed models for prediction scoring from within a Data Prep project using the Predict tool, and explore prediction results, including Prediction Explanations, in the Data Prep project for better conversion of predictions to value.
There is no hard technical limit on dataset/file sizes in DataRobot Paxata. There are, however, guardrails that are configurable. For an ideal interactive user experience, you (the admin in this case) will typically configure the number of Spark cores needed to support your dataset sizes; DataRobot Paxata customer success teams can help determine the sizing. It is possible to configure a limit on the number of rows a user interacts with in their project when creating data prep steps (typically in the tens of millions), and to set a different limit on the number of rows that can be processed in a batch job when the data prep steps are applied, with the ability to dynamically scale resources to complete the batch jobs.
In most Data Preparation exercises, Business Analysts and Data Scientists are working with raw Data from more than one Data Source (such as Database tables, Cloud Storage files, Cloud application data etc). Once the data preparation steps are applied, the prepared data (referred to as an 'Answerset' in DataRobot Paxata) is used in ML platforms for training models or running predictions.
Although DataRobot Paxata supports a variety of DataSources to which you can write the data back, typically the prepared data is written to the AI Catalog, Cloud Storage, or Data Warehouses.
Fuzzy matching helps in scenarios where you need to join data from different data sources and the data may not be represented in exactly the same way. For example, a customer name may be 'Danny Pool Service' in one dataset and 'Danny's Pool Service & Repair' in another.
DataRobot Paxata uses a number of algorithmic techniques, such as the Jaro-Winkler distance and automatic detection of stop words (such as 'and', 'Inc', 'Jr', etc.), to determine matches.
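For intuition, here is a minimal sketch of that style of matching: strip stop words and punctuation, then compare with Jaro-Winkler similarity. This is an illustrative re-implementation, not Paxata's actual code, and the stop-word list is a made-up example.

```python
def jaro(s, t):
    """Jaro similarity between two strings (0.0 to 1.0)."""
    if s == t:
        return 1.0
    ls, lt = len(s), len(t)
    if not ls or not lt:
        return 0.0
    window = max(ls, lt) // 2 - 1   # how far apart matching chars may sit
    s_hit, t_hit = [False] * ls, [False] * lt
    matches = 0
    for i, c in enumerate(s):
        lo, hi = max(0, i - window), min(i + window + 1, lt)
        for j in range(lo, hi):
            if not t_hit[j] and t[j] == c:
                s_hit[i] = t_hit[j] = True
                matches += 1
                break
    if not matches:
        return 0.0
    # transpositions: matched characters appearing out of order
    s_matched = [s[i] for i in range(ls) if s_hit[i]]
    t_matched = [t[j] for j in range(lt) if t_hit[j]]
    transpositions = sum(a != b for a, b in zip(s_matched, t_matched)) // 2
    return (matches / ls + matches / lt
            + (matches - transpositions) / matches) / 3

def jaro_winkler(s, t, p=0.1):
    """Boost Jaro similarity for strings sharing a common prefix (max 4 chars)."""
    j = jaro(s, t)
    prefix = 0
    for a, b in zip(s[:4], t[:4]):
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

STOP_WORDS = {"and", "inc", "jr", "llc"}   # illustrative list only

def normalize(name):
    """Lowercase, strip punctuation, and drop stop words before comparing."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " "
                      for ch in name.lower())
    return " ".join(tok for tok in cleaned.split() if tok not in STOP_WORDS)

score = jaro_winkler(normalize("Danny Pool Service"),
                     normalize("Danny's Pool Service & Repair"))
print(round(score, 2))  # high similarity despite the different spellings
```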
Hi @c_stauder !
While there are a number of relevant transformations, a few stand out as the most important and heavily used.
A number of other transformations deserve mention: the Remove Rows tool (for removing unwanted observations), Filtergrams (which aid visual exploration of data and selection of criteria for Remove Rows and other transformations), aggregate operations such as fixed/sliding windowed aggregates, imputation functions such as linear/average fill up and down, shaping operations such as Pivot/De-Pivot, and so on.
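To make two of those transformations concrete, here is a tiny standard-library sketch of fill-down imputation and a sliding-window average. The function names are mine for illustration, not Paxata's tool names:

```python
def fill_down(values):
    """Replace missing values (None) with the most recent non-missing value."""
    out, last = [], None
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

def sliding_avg(values, window):
    """Average over a sliding window of the given size (no partial windows)."""
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

print(fill_down([10, None, None, 12, None]))  # → [10, 10, 10, 12, 12]
print(sliding_avg([1, 2, 3, 4], 2))           # → [1.5, 2.5, 3.5]
```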
All of these transformations are automatically captured in the Step Editor for replay and/or sharing. DataRobot Paxata also allows multiple users to collaborate on a single Project while defining transformations
Hi @sallyS !
Paxata and DataRobot complement traditional data catalogs. Users typically leverage a traditional catalog to locate the data they are looking for; once they find it, they can bring it into DataRobot Paxata for preparation and then leverage the prepared data in their AutoML exercise.
Yes, you absolutely can. You can schedule a DataRobot Paxata Automatic Project Flow (APF) to go from ingestion of data to data prep to scoring to post-scoring data prep steps to export. This end-to-end workflow can be run as a single job, either on a schedule or on demand through the UI/REST API.
Great question. Traditional ETL typically caters to Data Engineers and IT developers who are very technical. IT developers receive requirements from business counterparts and implement the requirements as data pipelines. This is a waterfall model, with a lifecycle of requirements gathering, implementation, testing, and delivery/acceptance by the business. Any further changes that the business needs start back at the top of that life cycle.
Data Preparation tools, on the other hand, are built for Business Analysts, who can work with their data and interactively apply data cleansing and data transformation steps. To enable Business Analysts to achieve this, Data Preparation tools often embed intelligence and recommendations. For example, DataRobot Paxata can automatically detect joins across datasets and bring datasets together, while this would traditionally have been achieved through SQL scripts written by an IT developer in an ETL tool.
Another key difference is the nature of the use cases. ETL tools have been very successful in loading data into enterprise warehouses, where the structure of the data rarely changes. Data Preparation tools are helpful when businesses need to work with frequently changing and/or new data, as Business Analysts can explore the data and create transformations in an adaptive way.