
Incremental Refresh / Appending / Stacking Data in Paxata


Morning,

Quick question: does Paxata have the capability to perform an incremental refresh? I receive a set of Excel files every month and would like to stack them onto the existing data on a monthly basis. This would be considered an incremental refresh. Unfortunately, I haven't figured out a way to complete this task in Paxata.

It doesn't seem Paxata supports this. I tried creating a standard Excel file and then adding a version, but that didn't work. Then I tried creating a foundation dataset, created another dataset set to automate (new data), and appended it onto the foundation dataset within a project, hoping the project would keep all the appended data. No dice there either.

Any thoughts?
13 Replies

Hi @bella21 ,

To add to @sayyar's response, please let me know if a quick Zoom session to build a working prototype would help. I will be happy to set it up.

With Best Regards
Sudheer Kumar


Thanks @sayyar,


That seems like a pretty challenging multi-tiered process just to do a simple stack, but thanks for figuring it out. Instead of the multi-step approach, I created a historical master view that is refreshed as the foundation and then stacked with new data pulled from SharePoint.

On a side note, other programs have the ability to recognize changes in data by a specific data element. Maybe this would be a great feature to add to Paxata?

sayyar
Linear Actuator

@ychamb, 
I am glad that you are able to perform this in another manner as well. Would you please share the steps you took to accomplish it so that other users will also benefit? 

Thank you for the feedback on the ability to recognize changes in data. We will look into adding the capability. 

Regards,
Shyam Ayyar

Hey morning @sayyar,

As mentioned above.

I created a historical master view to be refreshed as the foundation, then stacked it with new data pulled from SharePoint.

Basically, I automated the historical/foundational piece, then automated the SharePoint data, and created a project that stacks them together.

Manual step - However, when my SharePoint data reaches its record limit, I have to export half the records, paste them into the historical master, and manually delete them from SharePoint (to eliminate dupes).
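That manual rollover (moving the older half of the SharePoint records into the historical master and dropping duplicates) could in principle be scripted. A rough sketch; the `id` key and the record limit are assumptions for illustration, not details from the thread:

```python
def roll_over(historical, sharepoint, limit, key="id"):
    """Once the SharePoint record list exceeds its limit, move roughly
    the older half into the historical master, skipping any keys the
    master already holds (dedupe), and return both updated lists."""
    if len(sharepoint) <= limit:
        return historical, sharepoint          # nothing to do yet
    cut = len(sharepoint) // 2                 # export about half, as in the manual step
    overflow, remaining = sharepoint[:cut], sharepoint[cut:]
    seen = {row[key] for row in historical}
    historical = historical + [row for row in overflow if row[key] not in seen]
    return historical, remaining
```

The dedupe-by-key check mirrors the manual "eliminate dupes" pass, so pasted records never land in the master twice.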