You cannot alter the metadata of the data-flow pipeline. In English, that means you cannot change the names and data types of the columns, nor can you add or remove them.
However, you CAN dynamically set the external sources and destinations. Would this be sufficient for you?
-Jamie
|||Hi Jamie - thanks for the quick reply. I don't think this will be sufficient. The 200 tables are all different - we are replicating tables from an Oracle 8i ERP database to SQL for reporting and analysis purposes. The metadata on each source-destination combination will be different from the next, so this will be a problem. As I see it, the only way to accomplish this concept is to dynamically create a new package for each table, i.e. each iteration of the ForEach loop. Do you agree?|||
OK, you have to create 200 packages. But you only have to create them once.
You are correct that the only other option is to dynamically build the package. That's not much fun, believe me!
-Jamie
|||
Thanks for that. That is disappointing as I was hoping for a more elegant solution than creating 200 separate packages.
If I was hardheaded enough to try dynamically building the package, any ideas on the system overhead of building a package dynamically 200 times versus running 200 pre-built packages?
Also, could you suggest any online examples of dynamically building the data flow package using VB script?
|||Peter G D wrote:
Thanks for that. That is disappointing as I was hoping for a more elegant solution than creating 200 separate packages.
200 different requirements mean 200 things to build. The complexity is in your requirement. I'm slightly confused as to how it could be made more elegant. I'd welcome your ideas though.
Peter G D wrote:
If I was hardheaded enough to try dynamically building the package, any ideas on the system overhead of building a package dynamically 200 times versus running 200 pre-built packages?
Interesting one. I don't know is the honest answer, but I'd love to find out. It depends on a lot of things, mainly the amount of data you're moving: the larger the dataset, the smaller the proportion of the total time spent building the package.
Peter G D wrote:
Also, could you suggest any online examples of dynamically building the data flow package using VB script?
No way. You won't be able to do this using VBScript. I don't even think you can do it in the Script Task. You are in custom task territory.
-Jamie
|||
i think you need to use ado.net to iterate over a lookup table that has the table name, source info, and destination info. for each table, you read the data into a recordset, then insert that data into the destination table. you should also probably use a transaction to roll back everything in the event of an error. all of this can be accomplished in a script task, roughly along the lines of the sketch below.
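for illustration, the idea could look something like this. it's only a minimal sketch, written as a standalone vb.net console program rather than the actual script task boilerplate; the connection strings, the dbo.TableMap lookup table and its SourceTable/DestTable columns are made-up names you'd replace with your own.

' minimal sketch: loop over a lookup table of source/destination pairs,
' pull each source table into a DataTable with ADO.NET, and bulk-insert it
' into the destination inside a transaction so a failure rolls everything back.
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.OracleClient   ' requires a reference to System.Data.OracleClient.dll

Module CopyTables
    Sub Main()
        ' placeholder connection strings - replace with your own
        Dim oraConnStr As String = "Data Source=ERP8I;User Id=scott;Password=tiger;"
        Dim sqlConnStr As String = "Data Source=RPTSRV;Initial Catalog=Reporting;Integrated Security=SSPI;"

        ' read the lookup table that maps each source table to its destination
        Dim map As New DataTable()
        Using cn As New SqlConnection(sqlConnStr)
            Dim da As New SqlDataAdapter("SELECT SourceTable, DestTable FROM dbo.TableMap", cn)
            da.Fill(map)
        End Using

        For Each row As DataRow In map.Rows
            Dim srcTable As String = row("SourceTable").ToString()
            Dim destTable As String = row("DestTable").ToString()

            ' pull the source data into memory (fine for modest volumes)
            Dim data As New DataTable()
            Using cn As New OracleConnection(oraConnStr)
                Dim da As New OracleDataAdapter("SELECT * FROM " & srcTable, cn)
                da.Fill(data)
            End Using

            ' load it into the destination inside a transaction
            Using cn As New SqlConnection(sqlConnStr)
                cn.Open()
                Using tx As SqlTransaction = cn.BeginTransaction()
                    Try
                        Using bulk As New SqlBulkCopy(cn, SqlBulkCopyOptions.Default, tx)
                            bulk.DestinationTableName = destTable
                            bulk.WriteToServer(data)
                        End Using
                        tx.Commit()
                    Catch
                        tx.Rollback()
                        Throw
                    End Try
                End Using
            End Using
        Next
    End Sub
End Module

since your source and destination tables share the same column names, you could also populate SqlBulkCopy's ColumnMappings collection by name rather than relying on ordinal position.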
hope this helps.
|||Thanks Duane. I've approached the solution much as you prescribe. I've got a table which has the source info and destination info; I read this into an object variable in the package, then use the object recordset as the basis for the Foreach loop. I thought that I'd be able to dynamically change the source and destination information on the data flow task via a script task, and then rebuild the metadata on the data flow task also using a script task (the tables contain exactly the same column names, so I naively thought the metadata could be rebuilt using column name matching). However, I'm now pessimistic that this approach is possible.
I'm a little unclear on your solution. When you say "you read the data into a recordset", do you mean read it into an object variable? (I don't have a development background, so I'm a little slow on these concepts!) Can you point me to any examples using a similar approach?
|||Peter G D wrote:
Thanks Duane. I've approached the solution much as you prescribe. I've got a table which has the source info and destination info; I read this into an object variable in the package, then use the object recordset as the basis for the Foreach loop. I thought that I'd be able to dynamically change the source and destination information on the data flow task via a script task, and then rebuild the metadata on the data flow task also using a script task (the tables contain exactly the same column names, so I naively thought the metadata could be rebuilt using column name matching). However, I'm now pessimistic that this approach is possible.
Correct. You cannot do that.
-Jamie
|||
actually, i'd rather back away from the recordset solution. a better method would be to use raw files instead (for performance reasons). perhaps you could stage the data as raw xml when pulling it out of the source -- i'm not sure if this is the best way. then, you could load that staged data into the destination; there's a rough sketch of the xml staging idea at the end of this post.
Peter G D wrote:
I'm a little unclear on your solution. When you say "you read the data into a recordset", do you mean read it into an object variable? (I don't have a development background, so I'm a little slow on these concepts!) Can you point me to any examples using a similar approach?
unfortunately, i don't know of any examples to point you towards. all i can tell you is that this solution requires knowledge of ado.net.
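for what it's worth, the xml staging idea above might look roughly like this for a single table. again this is only a sketch -- the table name, stage file path and connection strings are invented placeholders, and ssis raw files would almost certainly be faster than xml:

' minimal sketch of staging: extract a source table to an xml file (schema included),
' then reload the file and bulk-insert it into the destination.
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.OracleClient   ' requires a reference to System.Data.OracleClient.dll

Module StageViaXml
    Sub Main()
        Dim oraConnStr As String = "Data Source=ERP8I;User Id=scott;Password=tiger;"
        Dim sqlConnStr As String = "Data Source=RPTSRV;Initial Catalog=Reporting;Integrated Security=SSPI;"
        Dim stageFile As String = "C:\Stage\GL_ACCOUNTS.xml"   ' placeholder path

        ' extract: source table -> xml file on disk
        Dim extract As New DataSet()
        Dim src As DataTable = extract.Tables.Add("GL_ACCOUNTS")
        Using cn As New OracleConnection(oraConnStr)
            Dim da As New OracleDataAdapter("SELECT * FROM GL_ACCOUNTS", cn)
            da.Fill(src)
        End Using
        extract.WriteXml(stageFile, XmlWriteMode.WriteSchema)

        ' load: xml file -> destination table
        Dim staged As New DataSet()
        staged.ReadXml(stageFile)
        Using cn As New SqlConnection(sqlConnStr)
            cn.Open()
            Using bulk As New SqlBulkCopy(cn)
                bulk.DestinationTableName = "dbo.GL_ACCOUNTS"
                bulk.WriteToServer(staged.Tables(0))
            End Using
        End Using
    End Sub
End Module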