Sunday, February 26, 2012

Dynamic Data Source Expression

When I attempt to create a dynamic data source using an expression,
having tested the report with a constant data source string, the report
compilation fails with this error:
[rsUnexpectedCompilerError] An unexpected error occurred while
compiling expressions. Native compiler return value: '[BC32017]
Comma, ')', or a valid expression continuation expected.'
I am using Visual Studio 2005.
Simply changing the data source string from:
Data source=xxx;initial catalog=yyy
to:
=Data source=xxx;initial catalog=yyy
causes this error - i.e. I am not even referencing any parameters.
Has anyone encountered this before? Any resolutions?

Amendment:
=Data source=xxx;initial catalog=yyy
should read:
="Data source=xxx;initial catalog=yyy"

Dynamic Data Source

Hi All,
In my project there is a separate database for each client.
Based on the client, I need to connect to the corresponding database and use
that as the data source to generate the report.
Is it possible to have a dynamic data source without deploying all the reports
with a different data source for each client?
Kindly let me know how to proceed ...
looking forward to your replies
Thanks

You can use a data source expression, since it will be evaluated at run time,
e.g. ="data source=" & Parameters!ServerName.Value & ";initial
catalog=AdventureWorks" - the same can be done for the catalog as well.
But note that for your problem you can't just let the user select the data
source in the form of a drop-down.
If I have understood correctly, you need the DB to be selected automatically
depending on the user. You can write custom code through ASP.NET;
in fact this will be pretty simple.
Amarnath
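For reference, a complete version of that expression (the parameter names here are hypothetical) - the whole connection string has to be one quoted string expression:

```vb
' Expression for the data source's connection string.
' ServerName and DatabaseName are assumed report parameters.
="Data Source=" & Parameters!ServerName.Value & ";Initial Catalog=" & Parameters!DatabaseName.Value
```

Note that the credentials themselves cannot be expression-based, so stored credentials are typically used with such a data source.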
"Toby" wrote:
> Hi All,
> In my project there is separate database for each client.
> Based on the client i need to connect to the corresponding database and use
> that as the data source to generate Report.
> Is it possible to have dynamic data source without deploying all the reports
> with different ds for all the clients.
> Kindly let me know how to proceed ...
> looking forward to your replies
> Thanks
>

Dynamic Data Source

Hi,
I'm facing a problem where I have to connect to different data servers
in the same report. I'm not using any ASP applications. I want the
connection string to change dynamically upon the user's request.
For example:
Consider a report A where I have to access the same database on two
different servers (production and test).
There exists a report parameter with values prod & test. All I want to
know is whether it is possible to connect to different data servers (or
data sources) dynamically in the same report based on user parameters.
Please note that I'm viewing the reports directly from the reporting
server. I do not have any ASP.NET application to use the
DataSources.cs. I might have to change the XML code.
Regards,
Thyagu.

Yes, this is possible in RS 2005. In RS 2000 you can do it, but it is more
work: in RS 2000 you have to make your query expression-based and refer to
the appropriate database based on the parameter. RS 2005 makes it much easier
with the new support for expression-based data sources.
For lots of reasons I recommend going to RS 2005.
--
Bruce Loehle-Conger
MVP SQL Server Reporting Services
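As a sketch of what Bruce describes (the parameter and server names below are assumptions), an RS 2005 expression-based data source can pick the server from the prod/test parameter:

```vb
' Connection string expression for the report's data source.
' Environment is an assumed report parameter with values "prod" and "test".
="Data Source=" & IIf(Parameters!Environment.Value = "prod", "ProdServer", "TestServer") & ";Initial Catalog=MyDatabase"
```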

Dynamic Data sets in reports

Hi,

I'm using a matrix report in which I want to use two datasets in the same report. How can I make the dataset dynamic for a single report?

Regards

Can the two datasets be accessed from the same datasource/connection?

Can they be made to "express themselves" using exactly the same column names?

>L<

Dynamic data set parameters during report run time

I need to pass the output from a stored procedure as an input parameter to another stored procedure. I created a dataset for each stored procedure. From the second dataset's parameter tab, I added a parameter to refer to the field from the first dataset.

I get the error "Fields cannot be used in query parameter expressions."

I tried two simple queries instead of stored procedures, with the first query feeding the second query. I get the same error. Is there any other way to accomplish this?

Thank You.

Hello,

Are you trying to have a report parameter that is populated from one stored procedure and the data in the report is populated from another stored procedure?

Jarret

Dynamic Data in Page Header?

Hi all,
is there a way to add fields to the Page Header?
We have to display dynamic data on each page...
Thanks
Thomas

Thomas,
What kind of data is this? If it's a calculation, global parameter,
date, or parameters passed in, this is very easy by adding an
expression (something like: Parameters!Param1.Value).
If it's Fields, then you can create a group and place that field in the
group header - but you must group it by some field.
Then left-click on that row (the group header) and set
the RepeatOnEveryPage property to true.
I hope this helps.
regards,
Stas K.

Hi Thomas
No, you cannot add data fields to the Page Header, because it cannot have a
defined dataset. However, depending on your data and how you want to
display it, here is another solution:
1) get rid of the page header section of the report,
2) in the body of the report, use a table as the main container and make it
as wide as the width of the report, and make the table top start at the very
top of the report.
3) define the dataset for that table in the properties
4) add a table header row, and use that as the "page header". Make sure to
check the "Repeat on New page" option for the header row properties.
5) select all cells in the header row, and merge them.
6) drop a rectangle in the newly merged header cell, and voila, you have a
freeform container for your cell, so you can drop textboxes in there, etc...
In most of my reports, I have done that, instead of using a page header,
since in most cases I need to use a datafield as part of the header anyways.
You can have several header rows defined for each section in a table kinda
like this:
<table>
<tblHeader1>
<tblHeader2>
<tblHeader3>
<group1Header1>
<group1Header2>
<group2Header1>
<group2Header2>
<detailRow>
<group2Footer1>
<group2Footer2>
<group1Footer1>
<group1Footer2>
<tblFooter1>
<tblFooter2>
</table>
Hope that helps ya.
--
Regards,
Thiago Silva

Dynamic data elements for a data collection application

What is the better table design for a data collection application.
1. Vertical model (pk, attributeName, AttributeValue)
2. Custom columns (pk, custom1, custom2, custom3...custom50)

Since the data elements collected may change year over year, which
model better takes care of this column dynamicness?

(mittal.pradeep@.gmail.com) writes:

The vertical model is certainly cleaner from a relational perspective.
It also requires less maintenance.

But admittedly queries can be more complex. If attributes can be of
different data types, you need some triggers to check this. A tip
is that the sql_variant data type is good in this case.
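A minimal sketch of that vertical model, with assumed table and column names:

```sql
-- Vertical (attribute-per-row) table; sql_variant lets one column
-- hold values of different base types.
CREATE TABLE CollectedData (
    pk             int         NOT NULL,
    attributeName  varchar(50) NOT NULL,
    attributeValue sql_variant NULL,
    CONSTRAINT PK_CollectedData PRIMARY KEY (pk, attributeName)
);
```

A trigger could then check each attribute's base type with SQL_VARIANT_PROPERTY(attributeValue, 'BaseType') against a metadata table.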

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

1. But the data collection and reporting is in horizontal format. If
collected data is edited vertically, won't there be extra steps of
converting horizontally obtained data to vertical, and then vertical to
horizontal for reports? In the custom-column model the data always remains
horizontal. Won't performance be an issue with the two
extra steps required in the vertical model?
2. Won't concurrency be an issue, considering the fact that a
logically single horizontal row is edited as, say, 10 rows? Two people
might be changing the same primary key's different attributes at the same
time.

(mittal.pradeep@.gmail.com) writes:
> 1. But the data collection and reporting is in horizontal format. If
> collected data is edited vertically, won't there be a extra steps of
> converting horizontally obtained data to vertical and then vertical to
> horizontal reports. In custom column model data always remains
> horizontal. Won;t performance not be a issue in taking care of two
> extra steps required in vertical model.

If you are to present ten of those custom values as columns in a report,
you get a 10-way self-join. Certainly bulky in code. Performance is
probably not top-notch, but I don't see that it would be abysmal.
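To illustrate the join Erland describes (all names here are assumed), pivoting just three attributes back into columns already takes one join per attribute:

```sql
-- One join against the vertical table per attribute to be shown as a column.
SELECT d.pk,
       a1.attributeValue AS custom1,
       a2.attributeValue AS custom2,
       a3.attributeValue AS custom3
FROM (SELECT DISTINCT pk FROM CollectedData) AS d
LEFT JOIN CollectedData AS a1 ON a1.pk = d.pk AND a1.attributeName = 'custom1'
LEFT JOIN CollectedData AS a2 ON a2.pk = d.pk AND a2.attributeName = 'custom2'
LEFT JOIN CollectedData AS a3 ON a3.pk = d.pk AND a3.attributeName = 'custom3';
```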

> 2. Won;t the concurrency be a issue, considering the fact that a
> logical single horizontal row is edited as say 10 rows. Two people
> might be changing same Primary key's different attributes at the same
> time.

Good point. This can be handled fairly easily, but it requires more
careful programming than the horizontal method.

Overall, there certainly is a tradeoff. If the set of custom fields is
fairly stable, only changing once per year or so, you might be prepared to
take the extra maintenance cost. But if users ask for new fields every
week, then the horizontal method could be a nightmare.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Thanks a lot for the reply.
1. I am not able to understand why I require a 10-way join, though. If I
have a mapping between custom columns and actual columns, all I need is
dynamic SQL generated from the mapping.

E.g.
table
--
pk, custom1, custom2, custom3...custom10

map
--
customColName ActualColName
custom1 ActualName1
custom2 ActualName2
...

Now I can generate dynamic sql using map.

2. The number of custom columns just has to be enough to take care of the data
element additions that happen in a year. All that is needed is the
addition of new elements to the mapping table to decipher newly assigned
custom columns.
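A sketch of that mapping-driven approach (table and column names are assumptions):

```sql
-- Build "SELECT pk, custom1 AS ActualName1, custom2 AS ActualName2, ..."
-- from the map table, then execute it.
DECLARE @sql nvarchar(max);

SELECT @sql = N'SELECT pk'
    + (SELECT N', ' + QUOTENAME(customColName) + N' AS ' + QUOTENAME(ActualColName)
       FROM map
       FOR XML PATH(''))
    + N' FROM CustomTable';

EXEC sp_executesql @sql;
```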

Neither. The design flaw you are calling a vertical model is actually
known as "EAV" or "Entity-Attribute-Value" because it is a common
newbie mistake. I have no idea what your #2 means.

As your data elements change, you need to re-design the schema --
constraints, keys, data types, etc. Learn RDBMS and do it right.

I found an old "cut & paste". Someone like you posted this:

CREATE TABLE EAV -- no key declared
(key_col VARCHAR (10) NULL,
attrib_value VARCHAR (50) NULL);

INSERT INTO EAV VALUES ('LOCATION','Bedroom');
INSERT INTO EAV VALUES ('LOCATION','Dining Room');
INSERT INTO EAV VALUES ('LOCATION','Bathroom');
INSERT INTO EAV VALUES ('LOCATION','courtyard');
INSERT INTO EAV VALUES ('EVENT','verbal aggression');
INSERT INTO EAV VALUES ('EVENT','peer');
INSERT INTO EAV VALUES ('EVENT','bad behavior');
INSERT INTO EAV VALUES ('EVENT','other');

CREATE TABLE EAV_DATA -- note lack of constraints, defaults, DRI
(id INTEGER IDENTITY (1,1) NOT NULL,
bts_id INTEGER NULL,
key_col VARCHAR (10) NULL,
attrib_value VARCHAR (50) NULL );

INSERT INTO EAV_DATA VALUES (1, 'LOCATION', 'Bedroom');
INSERT INTO EAV_DATA VALUES (1, 'EVENT', 'other');
INSERT INTO EAV_DATA VALUES (1, 'EVENT', 'bad behavior');
INSERT INTO EAV_DATA VALUES (2, 'LOCATION', 'Bedroom');
INSERT INTO EAV_DATA VALUES (2, 'EVENT', 'other');
INSERT INTO EAV_DATA VALUES (2, 'EVENT', 'verbal aggression');
INSERT INTO EAV_DATA VALUES (3, 'LOCATION', 'courtyard');
INSERT INTO EAV_DATA VALUES (3, 'EVENT', 'other');
INSERT INTO EAV_DATA VALUES (3, 'EVENT', 'peer');

Ideally, the result set of the query would be Location Event count
(headings if possible)

Bedroom verbal aggression 1
Bedroom peer 0
Bedroom bad behavior 0
Bedroom other 2
Dining Room verbal aggression 0
Dining Room peer 0
Dining Room bad behavior 0
Dining Room other 0
Bathroom verbal aggression 0
Bathroom peer 0
Bathroom bad behavior 0
Bathroom other 0
courtyard verbal aggression 0
courtyard peer 1
courtyard bad behavior 0
courtyard other 1

Also, if possible, another query would return this result set. (I think
I know how to do this one.)

Location Event count
Bedroom verbal aggression 1
Bedroom other 2
courtyard peer 1
courtyard other 1

Here is a query from Thomas Coleman:

SELECT Locations.locationvalue, Events.eventvalue,
(SELECT COUNT(*)
FROM (SELECT LocationData.locationvalue, EventData.eventvalue

FROM (SELECT TD1.bts_id, TD1.value AS locationvalue
FROM eav_data AS TD1
WHERE TD1.key = 'location') AS LocationData
INNER JOIN
(SELECT TD2.bts_id, TD2.value AS eventvalue
FROM eav_data AS TD2
WHERE TD2.key = 'event'
) AS EventData
ON LocationData.bts_id = EventData.bts_id
) AS CollatedEventData
WHERE CollatedEventData.locationvalue = Locations.locationvalue
AND CollatedEventData.eventvalue = Events.eventvalue)
FROM (SELECT T1.value AS locationvalue
FROM EAV AS T1
WHERE T1.key = 'location') AS Locations,
(SELECT T2.value AS eventvalue
FROM EAV AS T2
WHERE T2.key = 'event') AS Events
ORDER BY Locations.locationvalue, Events.eventvalue;

SELECT Locations.locationvalue, Events.eventvalue,
(SELECT COUNT(*)
FROM (SELECT LocationData.locationvalue, EventData.eventvalue

FROM (SELECT TD1.bts_id, TD1.value AS locationvalue
FROM eav_data AS TD1
WHERE TD1.key = 'location') AS LocationData
INNER JOIN
(SELECT TD2.bts_id, TD2.value AS eventvalue
FROM eav_data AS TD2
WHERE TD2.key = 'event') AS EventData
ON LocationData.bts_id = EventData.bts_id)
AS CollatedEventData
WHERE CollatedEventData.locationvalue = Locations.locationvalue
AND CollatedEventData.eventvalue = Events.eventvalue)
FROM (SELECT T1.value AS locationvalue
FROM EAV AS T1
WHERE T1.key = 'location') AS Locations,
(SELECT T2.value AS eventvalue
FROM EAV AS T2
WHERE T2.key = 'event') AS Events;

Is the same thing in a proper schema as:

SELECT L.locationvalue, E.eventvalue, COUNT(*)
FROM Locations AS L, Events AS E
WHERE L.btd_id = E.btd_id
GROUP BY L.locationvalue, E.eventvalue;

The reason that I had to use so many subqueries is that those entities
are all lopped into the same table. There should be separate tables for
Locations and Events.

The column names are seriously painful. Beyond the fact that I
personally hate underscores in column names, using underscores at the
end of the column name is really non-intuitive. I removed them for my
example and came across the next column name faux pas. Don't use "key"
and "value" for column names. It means that the developer *has* to
surround the column name with square brackets for everything, which is a
serious pain.

There is such a thing as "too" generic. There has to be some structure
or everything becomes nothing more than a couple of tables called
"things". The real key (no pun intended) is commonality. Is there a
pattern to the data that they want to store? It may not be possible to
create one structure to rule them all and in the darkness bind them.

"To be is to be something in particular; to be nothing in particular is
to be nothing." --Aristotle

All data integrity is destroyed. Any typo becomes a new attribute or
entity. Entities are found missing attributes, so all the reports are
wrong.

Try to write a single CHECK() constraint that works for all the
attributes of those 30+ entities your users created because you were
too dumb or too lazy to do your job. It can be done! You need a CASE
expression with almost 70 WHEN clauses for a simple invoice and order
system, as I found when I tried it as an exercise.

Try to write a single DEFAULT clause for 30+ entities crammed into one
column. Impossible!

Try to set up DRI actions among the entities. If you thought the WHEN
clauses in the single CASE expression were unmaintainable, wait until
you see the "TRIGGERs from Hell" -- Too bad that they might not fit
into older SQL Server which had some size limits. Now maintain it.

For those who are interested, there are couple of links to articles I
found on the net:

Generic Design of Web-Based Clinical Databases
http://www.jmir.org/2003/4/e27/

The EAV/CR Model of Data Representation
http://ycmi.med.yale.edu/nadkarni/eav_CR_contents.htm

An Introduction to Entity-Attribute-Value Design for Generic
Clinical Study Data Management Systems
http://ycmi.med.yale.edu/nadkarni/...%20systems.htm

Data Extraction and Ad Hoc Query of an Entity- Attribute- Value
Database
http://www.pubmedcentral.nih.gov/a...l=pubmed&pubme...

Exploring Performance Issues for a Clinical Database Organized Using
an Entity-Attribute-Value Representation
http://www.pubmedcentral.nih.gov/a...l=pubmed&pubme...

(mittal.pradeep@.gmail.com) writes:
> Thanks a lot for the reply.
> 1. I am not able to understand why i require a 10 way join though. If i
> have a mapping between custom column and actual column, all i need is a
> dyanamic sql generated from the mapping.
> E.g.
> table
> --
> pk, custom1, custom2, custom3...custom10
>
> map
> --
> customColName ActualColName
> custom1 ActualName1
> custom2 ActualName2
> ...
> Now I can generate dynamic sql using map.

When I said 10-way join I was thinking of the vertical solution. For the
horizontal solution it's a simple join - once you have gone through all
that SQL building. To me, this sounds more complex to implement. Then
again, if the user selects dynamically which columns he wants to see,
the horizontal solution would require dynamic SQL as well.

> 2. As long as number of custom columns is enough to take care data
> element additions which happen in a year. All that is needed is
> addition of new elements to mapping table to decifer newly assigned
> custom columns

I didn't realise that you had this mapping table. One could say that
this is a kind of compromise between the horizontal model and an entirely
static vertical model.

--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

CELKO,

If I change the data elements 10 times during the year, I cannot change
the front end 10 times to accommodate them. I need to come up with a
dynamic solution to take care of this. The vertical approach is the best
way to achieve it; using custom columns is a compromise, as the vertical
approach is harder to implement.


Dynamic Data Connections

Hello,
I am designing a SRS integration where the data source for a report must be
changed depending on which database the user chooses. There are any number
of databases, all having the same schema. The user can select which database
they want to query in a drop-down list in the web application, and then
select the report to run.
Any ideas on the best way to dynamically set the connection at run-time?
I see that in RDL there is a tag for <ConnectionString> so I could set that.
However, I think with lots of people accessing the report this may cause
problems. Perhaps I need to make a .NET assembly that modifies the
connection string tag at run-time?
I am accessing the report via an internet shortcut to the URL, but it is
possible for me to modify the URL before it is called. Another possibility
is to include the connection string as a parameter and pass it in the URL.
This just seems a bit clunky to me, since all reports would have to have
this parameter. It would make it slightly more difficult to allow users to
design their own reports.
Any suggestions?
:-D Malik

See this:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=uaVMmQoEEHA.2600%40TK2MSFTNGP09.phx.gbl
or search for "dynamic connection" in this newsgroup
Antonio

Thank you for your quick response. After doing a search through the entire
newsgroup on this issue, I can see that setting a data source dynamically is
a common problem. However, I still have one question:
Is it possible to dynamically set a connection string or data source at
run-time by developing a Data Processing Extension? For my solution, it must
be set for that user's run-time only, and must not change the data source
for other users trying to access the same report at the same time. This is a
specific requirement of the integration.
I want to avoid as much as possible resorting to multiple copies of the
report, and setting the data source dynamically in the stored procedure.
These are workarounds, where I am hoping that DPE could allow me to write a
real solution. If DPE is the way to go, could you suggest any resources for
this? I know C# and I have read some of the Books Online stuff.
Malik
> >|||No experience with DPE. You could programmatically change the connection string through web services to generate the report, but the problem is user concurrency. I think this is a big issue in Reporting Services.
sorry
Antonio
"Abdul Malik Said" wrote:
> Thank you for your quick response. After doing a search through the entire
> newsgroup on this issue, I can see that setting a data source dynamically is
> a common problem. However, I still have one question:
> Is it possible to dynamically set a connection string or data source at
> run-time by developing a Data Processing Extension? For my solution, it must
> be set for that user's run-time only, and must not change the data source
> for other users trying to access the same report at the same time. This is a
> specific requirement of the integration.
> I want to avoid as much as possible resorting to multiple copies of the
> report, and setting the data source dynamically in the stored procedure.
> These are workarounds, where I am hoping that DPE could allow me to write a
> real solution. If DPE is the way to go, could you suggest any resources for
> this? I know C# and I have read some of the Books Online stuff.
>
> Malik
> "Antonio Rome" <AntonioRome@.discussions.microsoft.com> wrote in message
> news:F8D54E0A-8428-43E2-941D-C5DEA489349F@.microsoft.com...
> > see this:
> >
> >
> http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&selm=uaVMmQoEEHA.2600%40TK2MSFTNGP09.phx.gbl
> >
> > or search for "dynamic connection" in this newsgroup
> >
> > Antonio
> >
> > "Abdul Malik Said" wrote:
> >
> > > Hello,
> > >
> > > I am designing a SRS integration where the data source for a report must
> be
> > > changed depending on which database the user chooses. There are any
> number
> > > of databases, all having the same schema. The user can select which
> database
> > > they want to query in a drop-down list in the web application, and then
> > > select the report to run.
> > >
> > > Any ideas on the best way to dynamically set the connection at run-time?
> > >
> > > I see that in RDL there is a tag for <ConnectionString> so I could set
> that.
> > > However, I think with lots of people accessing the report this may cause
> > > problems. Perhaps I need to make a .NET assembly that modifies the
> > > connection string tag at run-time?
> > >
> > > I am accessing the report via an internet shortcut to the URL, but it is
> > > possible for me to modify the URL before it is called. Another
> possibility
> > > is to include the connection string as a parameter and pass it in the
> URL.
> > > This just seems a bit clunky to me, since all reports would have to have
> > > this parameter. It would make it slightly more difficult to allow users
> to
> > > design their own reports.
> > >
> > > Any suggestions?
> > >
> > > :-D Malik
> > >
> > >
> > >
>
>
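As an aside for this thread: the expression-based connection string mentioned in the earlier replies is entered in the report's data source dialog and takes roughly this shape (ServerName and InitialCatalog here are assumed report parameters, not anything from this thread):

```
="data source=" & Parameters!ServerName.Value & ";initial catalog=" & Parameters!InitialCatalog.Value
```

Note that the whole expression must be a quoted string prefixed with = - the [BC32017] compiler error quoted at the top of this page is exactly what happens when the quotes are left off. This only works with report-embedded data sources, not shared ones.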

Dynamic Data

Thanks for reading this post.
We have an application that can import data from various sources
contained in various formats. This data is to be stored in one table!
How do we handle this scenario? The only way I could think of is to
have a generic table with columns called columnName, stringValue and
numericValue and store every field in each row of the input data as a
row in the generic table.
Is this good design, or is there a better way to handle the problem,
please?
Thanks...|||Hi John,
When you say
> We have an application that can import data from various sources
> contained in various formats
do you mean that you get the data from text files, Access databases, or
Excel documents?
"John Smith" <postmaster@.sumanthcp.plus.com> wrote in message
news:1138098782.505092.292110@.g44g2000cwa.googlegroups.com...
> Thanks for reading this post.
> We have an application that can import data from various sources
> containined in various formats. This data is to be stored in one table!
> How do we handle this scenario? The only way I could think of is to
> have a generic table with columns called columnName, stringValue and
> numericValue and store every field in each row of the input data as a
> row in the generic table.
> Is this good design or is there is a better to handle the problem
> please.
>
> Thanks...
>|||Hi Uri,
Yes, we get data in different physical formats, but the data can also
have different schemas/columns. We import everything into SQL Server and
use the imported data for reporting. If we recognise some columns then
they will be used to produce some hash totals. I hope I have given you
the picture.
Thanks|||John Smith wrote:
> Thanks for reading this post.
> We have an application that can import data from various sources
> containined in various formats. This data is to be stored in one table!
> How do we handle this scenario? The only way I could think of is to
> have a generic table with columns called columnName, stringValue and
> numericValue and store every field in each row of the input data as a
> row in the generic table.
> Is this good design or is there is a better to handle the problem
> please.
>
> Thanks...
The external format shouldn't determine your database design. Design
the correct logical data model first. Then worry about how to get the
data into it.
Since you have given us zero information about the data you are
modelling I can't help you with the design. Anyway, newsgroups are not
the place to solve design problems. Good design requires more knowledge
about your business and requirements than we can reasonably expect to
learn through an online discussion.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--|||John Smith (postmaster@.sumanthcp.plus.com) writes:
> Thanks for reading this post.
> We have an application that can import data from various sources
> containined in various formats. This data is to be stored in one table!
> How do we handle this scenario? The only way I could think of is to
> have a generic table with columns called columnName, stringValue and
> numericValue and store every field in each row of the input data as a
> row in the generic table.
> Is this good design or is there is a better to handle the problem
> please.
Whether this is a good design or not depends on what you will use the data
for. Without further knowledge, I would not put much faith in this
solution, though. And you did mention reporting - that reporting will
not be any fun.
Probably you need to conduct further analysis of the data import, so you
can define more tables.
As for stringValue, numericValue etc, you may want to investigate the
type sql_variant, that can host any other datatype in SQL Server, save
text, ntext and image.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx|||The real question is - what are you doing with this data. When you have the
answer to that, you'll have the fundamentals of your logical model. Which
still has little to do with actually creating tables.
So, please tell us what this data is for (on your end) and we can guide you
to a solution. I can't imagine anyone just coming up with a solution based on
your post.
ML
http://milambda.blogspot.com/|||Look up various ETL tools for this kind of problem. They will do the
data scrubbing and format conversions. I would take a look at
Sunopsis.
>> have a generic table with columns called columnName, stringValue and
numericValue and store every field [sic] in each row of the input data
as a row in the generic table. <<
There is no such animal as a "generic table" in a properly designed
RDBMS. Tables model one and only one kind of entity, and all the attributes
are clearly defined. To be is to be something in particular; to be
nothing in particular or everything in general is to be nothing at all
-- Aristotle.
The design flaw you have re-discovered is called EAV and you can
Google it.|||Thanks to all for your answers,
In fact we are trying to develop a tool which is capable of ELT and
that's why we need a central table to hold input data temporarily
before transforming it to create files/other databases.
The full picture is as follows: Our application will receive
files/databases from external sources. The data we receive will have
some known attributes like for example - Quantity, Name, Cost etc. but
there may be more information (very likely) and we will not know what
those extra columns would be. We are planning to create two tables -
Core and Extra. Core will contain all the columns we know before hand
and will be used to run queries/reports (like checking hash values etc)
and an Extra table which will be an EAV table used only during transforming
data to other formats (formats are dynamic - users define them).
Is there an alternative elegant way of solving our problem or do we
have to live with the EAV flaw?|||John Smith wrote:
> Thanks to all for your answers,
> In fact we are trying to develop a tool which is capable of ELT and
> that's why we need a central table to hold input data temporarily
> before transforming it to create files/other databases.
> The full picture is as follows: Our application will recieve
> files/databases from external sources. The data we receive will have
> some known attributes like for example - Quantity, Name, Cost etc. but
> there may be more information (very likely) and we will not know what
> those extra columns would be. We are planning to create two tables -
> Core and Extra. Core will contain all the columns we know before hand
> and will be used to run queries/reports (like checking hash values etc)
> and Extra table which will be a EAV table used only during tranforming
> data to other formats (formats are dynamic - users define them).
> Is there an alternative elegant way of solving our problem or do we
> have to live with the EAV flaw?
I would echo Joe's suggestion that you consider off-the-shelf data
integration tools. Yours is precisely the type of application where
those packages have benefits over and above hand-coding your own
transformations. Specifically, you have changing metadata and so your
transformations won't be static. Therefore having a separate metadata
repository can work to your advantage because the tool will usually
take away some of the pain of generating the changing transformations.
As regards EAV, I'd say that it won't solve your problem. There is no
substitute for developing a properly normalized schema, and EAV isn't
flexible enough to represent all non-relational data sources. How will
you model an XML hierarchy with EAV, for example? Again, integration
tools can help because they will automate or semi-automate the process
of deriving a relational schema from the source. Also, bear in mind
that many of those tools expose their own API, so they are potentially
still extensible with your own code.
Microsoft SQL Server Integration Services (SSIS)
www.microsoft.com/sql/technologies/...on/default.mspx
www.sqlis.com
Microsoft Data Transformation Services (DTS - the predecessor of SSIS)
www.sqldts.com
Other integration software
www-306.ibm.com/software/data/integration/dis/
www.abinitio.com
www.datamirror.com
www.datawatch.com
www.embarcadero.com/products/dtstudio/index.html
www.informatica.com
www.pervasive.com/solutions/
www.microsoft.com/biztalk/default.mspx
Enterprise Integration Patterns
www.enterpriseintegrationpatterns.com
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--
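To make the Core/Extra split discussed above concrete, here is a minimal sketch of the kind of schema involved; the table and column names are illustrative only, and sql_variant is used for the Extra values as Erland suggested earlier in the thread:

```sql
-- Core: the columns known in advance, used for queries/reports and hash totals
CREATE TABLE CoreData (
    RowId    int IDENTITY PRIMARY KEY,
    Quantity int,
    Name     nvarchar(100),
    Cost     decimal(18, 2)
);

-- Extra: EAV-style holding area for columns not known in advance,
-- used only while transforming data to other formats
CREATE TABLE ExtraData (
    RowId      int NOT NULL REFERENCES CoreData (RowId),
    ColumnName sysname NOT NULL,
    ColValue   sql_variant NULL,  -- can hold any type except text/ntext/image
    PRIMARY KEY (RowId, ColumnName)
);
```

As the replies stress, the EAV half is a staging convenience for the transform step, not a schema to report against.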

Dynamic Cursor/ Dynamic SQL Statement

I've looked up Books Online on Dynamic Cursor/ Dynamic SQL Statement.

Using the examples given in Books Online returns compilation errors. See below.

Does anyone know how to use Dynamic Cursor/ Dynamic SQL Statement?

James

-- SQL -----

EXEC SQL BEGIN DECLARE SECTION;
char szCommand[] = "SELECT au_fname FROM authors WHERE au_lname = ?";
char szLastName[] = "White";
char szFirstName[30];
EXEC SQL END DECLARE SECTION;

EXEC SQL
DECLARE author_cursor CURSOR FOR select_statement;

EXEC SQL
PREPARE select_statement FROM :szCommand;

EXEC SQL OPEN author_cursor USING :szLastName;
EXEC SQL FETCH author_cursor INTO :szFirstName;

--Error-------
Server: Msg 170, Level 15, State 1, Line 23
Line 23: Incorrect syntax near ';'.
Server: Msg 1038, Level 15, State 1, Line 24
Cannot use empty object or column names. Use a single space if necessary.
Server: Msg 1038, Level 15, State 1, Line 25
Cannot use empty object or column names. Use a single space if necessary.
Server: Msg 170, Level 15, State 1, Line 27
Line 27: Incorrect syntax near ';'.
Server: Msg 170, Level 15, State 1, Line 30
Line 30: Incorrect syntax near 'select_statement'.
Server: Msg 170, Level 15, State 1, Line 33
Line 33: Incorrect syntax near 'select_statement'.
Server: Msg 102, Level 15, State 1, Line 35
Incorrect syntax near 'author_cursor'.
Server: Msg 170, Level 15, State 1, Line 36
Line 36: Incorrect syntax near ':'.|||Using the following seems to achieve what I wanted,
but I would still like to know how to use
1. PREPARE
2. EXECUTE
i.e. under what circumstances would you use those two?
They must be there for a reason.

James

--Working SQL-----

DECLARE @sql nvarchar(4000)
SET @sql = 'DECLARE @name nvarchar(128) ' +
'DECLARE test_cursor CURSOR FOR SELECT name FROM class_category ' +
'OPEN test_cursor ' +
'FETCH NEXT FROM test_cursor INTO @name ' +
'WHILE @@FETCH_STATUS = 0 ' +
'BEGIN ' +
'PRINT @name ' +
'FETCH NEXT FROM test_cursor INTO @name ' +
'END ' +
'CLOSE test_cursor ' +
'DEALLOCATE test_cursor '
EXECUTE sp_executesql @sql|||You seem to be confusing the embedded SQL (client-side) environment with the Transact-SQL environment. The embedded SQL code executes only on the client side, and that is where a PREPARE might make sense. The Transact-SQL code executes only on the server side, and you can't explicitly prepare code there (at least under normal circumstances).

-PatP
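For completeness: the closest server-side analogue to PREPARE/EXECUTE in Transact-SQL is sp_executesql with a parameterized statement, which lets SQL Server cache and reuse the plan. A minimal sketch against the same pubs authors table used in the Books Online example:

```sql
DECLARE @stmt nvarchar(200), @lname varchar(40)
-- the named parameter inside the string plays the role of the ESQL '?' marker
SET @stmt = N'SELECT au_fname FROM authors WHERE au_lname = @lname'
SET @lname = 'White'
EXEC sp_executesql @stmt, N'@lname varchar(40)', @lname
```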

Dynamic Cursor versus Forward Only Cursor gives Poor Performance

Hello,

I have a test database with table A containing 10,000 rows and a table
B containing 100,000 rows. Rows in B are "children" of rows in A -
each row in A has 10 related rows in B (ie. B has a foreign key to A).

Using ODBC I am executing the following loop 10,000 times, expressed
below in pseudo-code:

"select * from A order by a_pk option (fast 1)"
"fetch from A result set"
"select * from B where fk_to_a = 'xxx' order by b_pk option
(fast 1)"
"fetch from B result set" repeated 10 times

In the above psueod-code 'xxx' is the primary key of the current A
row. NOTE: it is not a mistake that we are repeatedly doing the A
query and retrieving only the first row.

When the queries use fast-forward-only cursors this takes about 2.5
minutes. When the queries use dynamic cursors this takes about 1 hour.

Does anyone know why the dynamic cursor is killing performance?
Because of the SQL Server ODBC driver it is not possible to have
nested/multiple fast-forward-only cursors, hence I need to explore
other alternatives.

I can only assume that a different query plan is getting constructed
for the dynamic cursor case versus the fast forward only cursor, but I
have no way of finding out what that query plan is.

All help appreciated.

Kevin|||Please explain what you are trying to do here. Cursors are usually best
avoided and typically perform much less efficiently than set-based
solutions. If you describe the problem in more detail someone should be able
to suggest an alternative that doesn't use a cursor. Post DDL (CREATE TABLE
statements), some sample data (INSERT statements) and show your required
result.

--
David Portas
SQL Server MVP
--
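To illustrate the set-based alternative David is suggesting: the 10,000 outer queries and 100,000 inner queries in the pseudo-code can usually be collapsed into a single join, fetched once (column names below are taken from the post; the exact select list is an assumption):

```sql
-- one result set instead of ~110,000 per-row round trips
SELECT a.a_pk, b.b_pk          -- list whichever columns are actually needed
FROM A AS a
JOIN B AS b ON b.fk_to_a = a.a_pk
ORDER BY a.a_pk, b.b_pk
```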

Dynamic cursor variable

I have this code in which I define a @TempTableCursor,
but is there a way that the TargetTable will be
TargetTable1, TargetTable2, ... TargetTable(i),
so that when I set the @TempTableCursor it will have in the
definition the TargetTable with a dynamically changing number?
[code]
Declare @TempTableCursor cursor
Set @TempTableCursor = Cursor Local FAST_FORWARD
For Select * From TargetTable
[/code]
Thanks in advance
peleg
On Aug 2, 3:44 pm, pelegk1 <pele...@.discussions.microsoft.com> wrote:
> i have this code in which i define a @.TempTableCursor
> but is there a way that the TargetTable will be
> TargetTable1,TargetTable2,..TargetTable(i)
> so when i define set the @.TempTableCursor it will have in the
> defenition the TargetTable with a dynamic changing number?
> [code]
> Declare @.TempTableCursor cursor
> Set @.TempTableCursor = Cursor Local FAST_FORWARD
> For Select * From TargetTable
> [/code]
> thnaks in advance
> peleg
declare @sql nvarchar(4000)
declare @table varchar(100)
declare @cur cursor  -- the cursor variable must be declared before use
set @table = 't'
set @sql = N'
set @cur = cursor for
select * from ' + @table + '; open @cur'
exec sp_executesql @sql, N'@cur cursor output', @cur output
if cursor_status('variable', '@cur') = 1
begin
    -- fetch from @cur and process rows here
end
if cursor_status('variable', '@cur') >= 0
begin
    close @cur
    deallocate @cur
end
Regards
Amish Shah
http://shahamish.tripod.com
|||Thanks a lot
"amish" wrote:

> On Aug 2, 3:44 pm, pelegk1 <pele...@.discussions.microsoft.com> wrote:
> declare @.sql nvarchar(4000)
> declare @.table varchar(100)
> set @.table = 't'
> set @.sql = N'
> set @.cur = cursor for
> select
> * from ' + @.table + '; open @.cur'
>
> exec sp_executesql @.sql, N'@.cur cursor output', @.cur
> output
>
> if cursor_status('variable', '@.cur') = 1
> begin
> .....................
> .....................
> end
>
> if cursor_status('variable', '@.cur') >= 0
> close @.cur
>
> deallocate @.cur
> Regards
> Amish Shah
> http://shahamish.tripod.com
>
|||On Aug 2, 5:38 pm, pelegk1 <pele...@.discussions.microsoft.com> wrote:
> thnaks alot
>
> "amish" wrote:
:-)
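Pulling Amish's fragments together, a complete version of the dynamic cursor variable might read as follows; the table name 't' and the loop body are placeholders, and QUOTENAME is added here (it was not in the original post) to guard the concatenated table name against injection:

```sql
DECLARE @sql nvarchar(4000), @cur cursor, @table sysname
SET @table = N't'
SET @sql = N'SET @cur = CURSOR LOCAL FAST_FORWARD FOR
             SELECT * FROM ' + QUOTENAME(@table) + N'; OPEN @cur'

-- the cursor comes back through an output parameter of type cursor
EXEC sp_executesql @sql, N'@cur cursor OUTPUT', @cur OUTPUT

IF CURSOR_STATUS('variable', '@cur') = 1
BEGIN
    -- FETCH NEXT FROM @cur INTO ... and process rows here
END

IF CURSOR_STATUS('variable', '@cur') >= 0
BEGIN
    CLOSE @cur
    DEALLOCATE @cur
END
```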


Dynamic Cursor Generation..

Hi Everybody,
I have a problem with dynamic cursor generation.
I am writing the problem below:
======================================
create proc test
as
declare @query varchar(500)
set @query = 'select * from table'
--------------
declare mycur Cursor for Select * from table
open mycur
--------------
but instead of the above block, how can I dynamically generate this query?
------------
declare mycur Cursor for exec (@query)
------------
Or tell me the way.
Regards
Arijit Chatterjee|||Hi, the problem was not clear - what do you want to do dynamically?
If you want to fetch rows through a proc, you can code it like this:

create proc test
as
exec('select * from table')

This might work.

Chatterjee) wrote:
>but instate of above block how can I dynamically generate this query?
>------------
>declare mycur Cursor for exec (@.query) |
>------------

The way I've done stuff like this in the past is to stuff the results
into a temp table and run a cursor off that.

Example:
declare @SQLstring varchar(100)
create table #foo (foovar int)

select @SQLstring = 'select distinct foovar from footable'

insert into #foo exec(@SQLstring)

declare foo_cursor cursor for select distinct foovar from #foo
etc.

Obviously you really want to do this type of stuff when there is no
other way around it.

>Or tell me the way.
>Regards
>Arijit Chatterjee

dynamic cursor - sorting in declaration

Hello everybody!

I have a small table "ABC" like this:

id_position | value
--------
1 | 11
2 | 22
3 | 33

I try to use a dynamic cursor as below.
When the statement "order by id_position" in the declare part of cursor_abc
is omitted, the cursor works as it should.
But when the statement "order by id_position" is used, the cursor behaves as
a static one.
What's the matter? Does anybody know?

Code:

declare @id_position as int, @value as int

DECLARE cursor_abc CURSOR
FOR
select id_position, value from abc
order by id_position

set nocount on
open cursor_abc
FETCH NEXT FROM cursor_abc
INTO @id_position, @value

WHILE @@FETCH_STATUS = 0
BEGIN

print @id_position
print @value
print '---------'

update abc set value=666 --next reading should give value=666

FETCH NEXT FROM cursor_abc
INTO @id_position, @value

END

CLOSE cursor_abc
DEALLOCATE cursor_abc
GO

Regards

Lucas|||Łukasz W. wrote:

Quote:

Originally Posted by

I try to use a dynamic cursor as below.


Cursors should be avoided if at all possible.

Quote:

Originally Posted by

print @.id_position
print @.value
print '---------'


Is this just a quick-and-dirty test? If you're trying to generate an
actual production file like this, then you should seriously consider
having the database output raw data, and using some separate tool to
apply formatting.

Quote:

Originally Posted by

update abc set value=666 --next reading should give value=666


This is obviously dummy code. What are you actually trying to do
here - apply some function and use the result to control which row is
output next? What does that function look like? It may be possible
to rewrite the whole thing without cursors; failing that, you should
seriously consider having the database output data unsorted, or sorted
in a simple fashion, and using some separate tool (possibly the same
one used to apply formatting) to apply the complex sort rule.|||Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, data types, etc. in
your schema are. Sample data is also a good idea, along with clear
specifications. It is very hard to debug code when you do not let us
see it.

In many years of writing SQL, I have seldom found a need for a cursor.
They usually run 1-2 orders of magnitude slower than a relational
solution.

When someone uses one, it is generally because they are mimicking a
magnetic tape file system, and probably violating the basic principle
of a tiered architecture: that display is done in the front end and
never in the back end. This is a more basic programming principle than
just SQL and RDBMS.

Finally, id_position is not an ISO-11179 data element name and it makes
no sense. Identifier of what? Position of what? You have two
adjectives without a noun. But I bet you meant it to be a PHYSICAL
location because you are mimicking a magnetic tape file system, instead
of using SQL for an RDBMS.

What is your real problem? Show us and perhaps we can help you.
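On the behaviour Łukasz observed at the top of this thread: SQL Server silently converts a DYNAMIC cursor to a less dynamic type when the ORDER BY cannot be satisfied by an index on the sort column (an index on id_position should avoid that). One way to check what cursor type the server actually built is sp_describe_cursor; a sketch, assuming the cursor got the default GLOBAL scope:

```sql
-- after OPEN cursor_abc, ask the server what it really built
DECLARE @report cursor
EXEC sp_describe_cursor
     @cursor_return   = @report OUTPUT,
     @cursor_source   = N'global',
     @cursor_identity = N'cursor_abc'
FETCH NEXT FROM @report
-- the "model" column of the row fetched above reports the cursor type:
-- 1 = static, 2 = keyset, 3 = dynamic, 4 = fast forward
CLOSE @report
DEALLOCATE @report
```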

Dynamic Cursor

I am trying to use a dynamic cursor in a stored procedure:
The code looks like this :

/************************************************** ***
set @sFormula = 'Monthlyformula'
set @sStartDate = '02/01/2004'
set @sEndDate = '02/01/2004'

exec('DECLARE APPGRIDROWS_METRICS CURSOR FOR select populateid From appgridrows where histdisplaygrid = 3 And '+ @sFormula +' Is Null and exists (SELECT 1 From PAYROLL_DATA_PERIOD as h Where h.id=1 and h.populateid=appgridrows.populateid and h.StartDate between '+ @sStartDate +' and '+ @sEndDate +')' )
/************************************************** ***

And this is what it is interpreting

select populateid From appgridrows where histdisplaygrid = 3 And Monthlyformula Is Null and exists (SELECT 1 From PAYROLL_DATA_PERIOD as h Where h.id=1 and h.populateid=appgridrows.populateid and h.StartDate between 02/01/2004 and 02/01/2004)

My problem is: is there any way I can put quotes around those dates ('02/01/2004') so that my cursor has some records returned?

Thanks in advance

SK|||What happens if you try

exec('DECLARE APPGRIDROWS_METRICS CURSOR FOR select populateid From appgridrows where histdisplaygrid = 3 And '+ @sFormula +' Is Null and exists (SELECT 1 From PAYROLL_DATA_PERIOD as h Where h.id=1 and h.populateid=appgridrows.populateid and h.StartDate between '''+ @sStartDate +''' and '''+ @sEndDate +''')' )

or is it

exec('DECLARE APPGRIDROWS_METRICS CURSOR FOR select populateid From appgridrows where histdisplaygrid = 3 And '+ @sFormula +' Is Null and exists (SELECT 1 From PAYROLL_DATA_PERIOD as h Where h.id=1 and h.populateid=appgridrows.populateid and h.StartDate between ''+ @sStartDate +'' and ''+ @sEndDate +'')' )

Johnny...tell him/her they've won...

Is this inside a sproc?

What are you ultimatley trying to do?

In other words, what action is applied to the cursor rows?|||Originally posted by Paul Young
what happens if you try

exec('DECLARE APPGRIDROWS_METRICS CURSOR FOR select populateid From appgridrows where histdisplaygrid = 3 And '+ @.sFormula +' Is Null and exists (SELECT 1 From PAYROLL_DATA_PERIOD as h Where h.id=1 and h.populateid=appgridrows.populateid and h.StartDate between '''+ @.sStartDate +''' and '''+ @.sEndDate +''')' )

or is it

exec('DECLARE APPGRIDROWS_METRICS CURSOR FOR select populateid From appgridrows where histdisplaygrid = 3 And '+ @.sFormula +' Is Null and exists (SELECT 1 From PAYROLL_DATA_PERIOD as h Where h.id=1 and h.populateid=appgridrows.populateid and h.StartDate between ''+ @.sStartDate +'' and ''+ @.sEndDate +'')' )

Neither of the methods works|||I am trying to get records from a SQL statement into a cursor, where the SQL is determined by the input parameters of the SP. That's why the SQL is built dynamically.

Thanks

Originally posted by Brett Kaiser
Dynamic SQL AND a Cursor..

Johnny...tell him/her they've won...

Is this inside a sproc?

What are you ultimatley trying to do?

In other words, what action is applied to the cursor rows?
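For anyone hitting the same quoting problem: in T-SQL, a doubled single quote inside a string literal emits one literal quote character, which is why the triple-quote pattern in Paul's first variant is the one that wraps the date values. A minimal sketch (variable names taken from the thread; the SELECT itself is just illustrative):

```sql
DECLARE @sStartDate varchar(10), @sEndDate varchar(10), @sql varchar(500)
SET @sStartDate = '02/01/2004'
SET @sEndDate   = '02/01/2004'

-- Inside a literal, '' emits one quote character, so the
-- ''' + @sStartDate + ''' pattern quotes the value at run time.
SET @sql = 'SELECT populateid FROM appgridrows WHERE StartDate BETWEEN '''
         + @sStartDate + ''' AND ''' + @sEndDate + ''''

PRINT @sql
-- SELECT populateid FROM appgridrows WHERE StartDate BETWEEN '02/01/2004' AND '02/01/2004'
```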

Dynamic cursor

Hi All…

I need to bind a DataGrid to a server-side dynamic cursor.

Please help!

That is such a 'bad' and boneheaded idea that there is no way I would offer help other than to suggest you 're-think' the issue.

Using a dynamic cursor in that manner means that locks will be placed on rows and, most likely, tables, and it will be up to the user's whim to close the form and release those locks - meaning that other users will be waiting, and waiting, and waiting.

It's rarely a good idea to use cursors, and in this scenario, that is NOT a good plan...

Dynamic Cube partitioning in MSAS2000

Hi,

Please give me a way by which I can dynamically create the cube's partitions. Actually, at the start of every month I have to create a partition for that month manually. Can I make this automatic in any way?

Please help me with this; I need it urgently.

Thanks and Regards

Abhishek

SQL Server Integration Services has an Analysis Services Execute DDL task which can be used to create partitions. The DDL for this task can be created by a script task which takes a template DDL and inserts the month-specific information such as the partition name and ID and its source table. You can then use the Analysis Services Processing Task to process those partitions. These tasks can be part of a scheduled SSIS package.
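For what it's worth, the DDL that task executes is XMLA; a rough, untested template for a monthly partition might look like the following, where every ID, name, and table below is a placeholder to be filled in by the script task (check the exact element set against the XMLA Create reference before relying on it):

```xml
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>MyOlapDb</DatabaseID>
    <CubeID>Sales</CubeID>
    <MeasureGroupID>SalesFacts</MeasureGroupID>
  </ParentObject>
  <ObjectDefinition>
    <Partition xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <ID>Sales_200601</ID>
      <Name>Sales_200601</Name>
      <Source xsi:type="TableBinding">
        <DataSourceID>MyDataSource</DataSourceID>
        <DbTableName>FactSales_200601</DbTableName>
      </Source>
      <StorageMode>Molap</StorageMode>
    </Partition>
  </ObjectDefinition>
</Create>
```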

|||

Hi Matt,

Thanks for your response, but I am working in MS AS 2000. Please help me with a SQL Server 2000 reference.

Regards

Abhishek

|||Sorry, I missed that. Analysis Services 2000 can work with SQL Server Data Transformation Services in a similar fashion. You'll want to create a script task and use DSO to create the partitions, and then you can use the AS Processing task to process it.|||

Hi Carroll,

Thanks a lot for pointing me to DSO. It would be helpful if you could send some demo code showing how to use it, and especially how to use it from DTS.

Regards

Abhishek

dynamic cube filter

Does anybody know how to dynamically put a filter into an incremental update of a cube, which aborts the update if the data is redundant?

I assume the time dimension in the cube needs to be checked, e.g. whether the monthly data already exists in the cube, and it needs to be compared with a month from the view which underlies the cube's fact table. However, I don't know enough about MDX or cubes in general to figure this out :-)

Is it possible to delete, let's say, a particular month from the time dimension and all updates for this month from a cube?

Not sure what your scenario is.

A few suggestions:

First, take a look at the whitepaper about Analysis Services processing: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/sql2k5_asprocarch.asp

Second, to answer your question about deleting data from a dimension: the answer is yes, but only if you are deleting just a few members. If you are only going to delete one customer from the dimension it should be fine, but not all the days of a month in the Time dimension. Analysis Server does not actually delete the internal record for a dimension member; it just marks it as deleted. So in time you'd have to fully reprocess your dimension.

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.


Dynamic Crystal Reports w/ .NET

I am trying to replicate the following screen in a Crystal Reports for .NET.

http://bellsouthpwp.net/w/i/wilburton/Schedule.JPG

I am having problems. You cannot create a formula for the position/width of a box or textbox. Also, there doesn't appear to be a way to write code to format the template as each row is processed. Changing the width/left of the boxes only changes the template before the data is processed and the report is created.

I talked to Crystal Reports technical support and their only decent suggestion was to use a bar chart. This almost works, but due to the flexibility of the charting components I don't think I will be able to get anything reasonably close to this format.

Any ideas?|||Maybe a chart would work?

Dynamic Crystal Report

Dear All,
How can I populate a data report using a SQL query?
Thanks in Advance
Dana|||Create a stored procedure with that query and design the report using that procedure.
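A minimal sketch of that suggestion (the procedure, table, and column names here are all made up):

```sql
-- Wrap the report's query in a stored procedure...
CREATE PROCEDURE rptSalesSummary
AS
SELECT Region, SUM(Amount) AS TotalAmount
FROM Sales
GROUP BY Region
GO
```

...then point the report's data source at rptSalesSummary in the designer instead of an ad hoc SQL string.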

Dynamic Cross-Tab Query too long?

I am using the Dynamic Cross-Tab code supplied in an article from SQL
Server Magazine (http://www.winnetmag.com/SQLServer/...608/15608.html).
I modified the script to generate a temp table inside the stored
procedure, and then use this temp table as the source for the
cross-tab. However, the problem seems to be that the dynamic SQL
string generated by the script is longer than what can be stored in
the @.SQL variable. The Cross-tab works great, so long as the amount of
data to be pivoted is small.

Is there any way around this? E.g. a User defined type, or another
data type which can store more characters?

Thanks,

Tim

CREATE PROCEDURE CBN_CrossTab

@StudyID varchar(100), --Model ID passed from web app - only one model can be selected
@Level int --The level to which the taxonomy should be rolled up

AS

DECLARE

@Table as sysname, --Table to crosstab
@OnRows as nvarchar(128), --Grouping key values (on rows)
@OnRowsAlias as sysname, --Alias for grouping column
@OnCols as nvarchar(128), --Destination columns (on columns)
@SumCol as sysname, --Data cells
@SQL AS varchar(8000), --String to hold generated SQL string
@NEWLINE as char(1) --Holds the newline character for the code

SET @OnRowsAlias = Null
SET @SumCol = Null
SET @NEWLINE = CHAR(10)

-- Generate the temp table for the taxa and counts
CREATE TABLE #RefOrganisms (sampleid int, txtTaxa varchar(75), fltCount float)

INSERT INTO #RefOrganisms(sampleid, txtTaxa, fltCount)

SELECT dbo.tblsampledata.sampleid,
dbo.CBN_RecursTaxa(dbo.tblbenthic.organism_tsn, @Level, ' ') AS Taxa,
SUM(dbo.tblbenthic.[count] / dbo.tblsitedetail.numberofreps) AS SumCount
FROM dbo.tblstudylist INNER JOIN
dbo.tblsite ON dbo.tblstudylist.studyid = dbo.tblsite.study_id INNER JOIN
dbo.tblsitedetail ON dbo.tblsite.siteid = dbo.tblsitedetail.site_id INNER JOIN
dbo.tblsampledata ON dbo.tblsitedetail.sitedetailsid = dbo.tblsampledata.sitedetails_id INNER JOIN
dbo.tblbenthic ON dbo.tblsampledata.sampleid = dbo.tblbenthic.sample_id INNER JOIN
dbo.iter_intlist_to_table(@StudyID) i ON dbo.tblstudylist.studyid = i.number INNER JOIN
dbo.tblbenthictaxa ON dbo.tblbenthic.organism_tsn = dbo.tblbenthictaxa.tsn
WHERE (dbo.tblsampledata.qaqc = 0) AND (dbo.tblsampledata.status = 2) AND (dbo.tblbenthictaxa.rank_id >= @Level)
GROUP BY
dbo.tblsampledata.sampleid,
dbo.CBN_RecursTaxa(dbo.tblbenthic.organism_tsn, @Level, ' ')

-- Identify the temp table info for the crosstab

SELECT @Table = '#RefOrganisms'
SELECT @OnRows = 'sampleid'
SELECT @OnCols = 'txtTaxa'
SELECT @OnRowsAlias = Null
SELECT @SumCol = 'fltCount'

--STEP 1: BEGINNING OF SQL STRING
SET @SQL = 'SELECT' + @NEWLINE + ' ' + @OnRows +
CASE
WHEN @OnRowsAlias IS NOT NULL THEN ' AS ' + @OnRowsAlias
ELSE ''
END

CREATE TABLE #KEYS(KEYVALUE NVARCHAR(100) NOT NULL PRIMARY KEY)

DECLARE @KEYSSQL AS VARCHAR(1000)

SET @KEYSSQL = 'INSERT INTO #KEYS ' + 'SELECT DISTINCT CAST(' + @OnCols + '
AS NVARCHAR(100)) ' + 'FROM ' + @Table
EXEC (@KEYSSQL)

DECLARE @KEY AS NVARCHAR(100)

SELECT @KEY = MIN(KEYVALUE) FROM #KEYS

WHILE @KEY IS NOT NULL
BEGIN
SET @SQL = @SQL + ' ,' + @NEWLINE +
' SUM(CASE CAST(' + @OnCols +
' AS NVARCHAR(100))' + @NEWLINE +
' WHEN N''' + @KEY +
''' THEN ' + CASE
WHEN @SumCol IS NULL THEN '1'
ELSE @SumCol
END + @NEWLINE +
' ELSE 0' + @NEWLINE +
' END) AS [' + @KEY + ']'
SELECT @KEY = MIN(KEYVALUE) FROM #KEYS
WHERE KEYVALUE > @KEY
END

SET @SQL = @SQL + @NEWLINE +
'FROM ' + @Table + @NEWLINE +
'GROUP BY ' + @OnRows + @NEWLINE +
'ORDER BY ' + @OnRows

PRINT @SQL --FOR DEBUG
EXEC (@SQL)
GO

"Tim Pascoe" <tim.pascoe@cciw.ca> wrote in message
news:19555f2b.0402240832.1e138923@posting.google.com...
> I am using the Dynamic Cross-Tab code supplied in an article from SQL
> Server Magazine
(http://www.winnetmag.com/SQLServer/...608/15608.html).
> I modified the script to generate a temp table inside the stored
> procedure, and then use this temp table as the source for the
> cross-tab. However, the problem seems to be that the dynamic SQL
> string generated by the script is longer than what can be stored in
> the @.SQL variable. The Cross-tab works great, so long as the amount of
> data to be pivoted is small.
> Is there any way around this? E.g. a User defined type, or another
> data type which can store more characters?
> Thanks,
> Tim

<snip>
One possibility is to use several variables, then execute them like this:

EXEC(@sql1 + @sql2 + @sql3 + ...)

See EXECUTE in Books Online for more information. There are data types which
hold more than 8000 characters (text, ntext), but they cannot be used with
EXEC().
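A minimal sketch of that pattern, assuming the generated string is simply built in pieces:

```sql
DECLARE @sql1 varchar(8000), @sql2 varchar(8000), @sql3 varchar(8000)

-- Each variable is limited to 8000 characters, but EXEC()
-- concatenates the pieces before executing the combined string.
SET @sql1 = 'SELECT sampleid'
SET @sql2 = ' FROM #RefOrganisms'   -- the long generated column list would go in here
SET @sql3 = ' ORDER BY sampleid'

EXEC (@sql1 + @sql2 + @sql3)
```

The practical consequence for the cross-tab proc is to append each generated column expression to whichever variable still has room, rather than to a single @SQL.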

Simon

Dynamic crosstab query in MS SQL Server 2000


Hello all!
I have a problem with creating a crosstab query in MS SQL Server 2000. I
spent 8 hours searching the internet but without result. I would like to
transform this data:
MRPController WK Value
C01 200505 1
C01 200505 1
C02 200505 2
C03 200506 4
C03 200506 7
C04 200505 1
C04 200507 5
into:
MRPController 200505 200506 200507
C01 2
C02 2
C03 4
C04 1 5
The data are updated once a week, that's why I need a dynamic crosstab
query which lets me get such a result in MS SQL Server 2000. I found out
that it is not so easy to create such a crosstab query in MS SQL Server
2000, but I am wondering why it is so easy even in MS Access 97 and
Excel 97, and such a tough case in MS SQL Server released in 2000. I
have searched newsgroups, but I didn't find anything which could help me.
I found some SQL procedures but they didn't work. I heard that in MS SQL
Server 2005 there is a special function which lets you do it, but I have
MS SQL Server 2000 and I need to do this in this version of SQL Server.
Is it possible? Is there a correct method to do it? Please be so kind
and help; I don't know what to do any more, and it is a much-wanted
query in my company. I didn't think that I would get stuck on such a query.
Thank you in advance for your help
I really appreciate it
Marcin from Poland
*** Sent via Developersdex http://www.examnotes.net ***|||Take a look at this link:
[url]http://www.experts-exchange.com/Databases/Microsoft_SQL_Server/Q_21205811.html[/url]
I use the transform proc just as you described with great results. You
will have to make a few small modifications so the date is labeled to
your liking. I have mine labeled (Month Year, i.e. March 2005).
GL|||Dynamic Crosstab Queries
http://www.windowsitpro.com/SQLServ...5608/15608.html
Dynamic Cross-Tabs/Pivot Tables
http://www.sqlteam.com/item.asp?ItemID=2955
AMB
"Marcin Zmyslowski" wrote:

>
> Hello all!
> I have a problem with creating crosstab query in MS SQL Server 2000. I
> spent 8 hours on searching internet to achieve my succes but without
> result. I would like to transform such data:
> MRPController WK Value
> C01 200505 1
> C01 200505 1
> C02 200505 2
> C03 200506 4
> C03 200506 7
> C04 200505 1
> C04 200507 5
> into:
> MRPController 200505 200506 200507
> C01 2
> C02 2
> C03 4
> C04 1 5
> The data are updated once a w, that`s why I need a dynamic crosstab
> query which let me receive such query in MS SQL Server 2000. I found out
> that it is no so easy to create such cross tab query in MS SQL Server
> 2000, but I am wondering why it is so easy even in MS Access 1997 and
> Excel 1997, and it is so tough case in MS SQL Server released in 2000. I
> have search newsgroups, but I didn`t find anything whcih could help me.
> I found some SQL procedures but they didn`t work. I heard that in MS SQL
> Server 2005 there is a special function who let do it, but I have MS SQL
> Server 2000 and I need to do this in this version on SQL Server. Is it
> possible to do it? Is it some correct method to do it. Please be so kind
> and help, but I already don`t know what to do and it is very wanted
> query in my company. I didn`t think that I stuck on such query.
> Thank you in advance for your help
> I really apprieciate it
> Marcin from Poland
> *** Sent via Developersdex http://www.examnotes.net ***
>

Dynamic Crosstab

Say I have the following output from a view on SQL Server 2000;
RegID StudID SeqNo EnrolNo Name Status Prog AttendDate AttendCode
20 0123036 01 0123036/1 COLIN ANSELL C ENCOMPOL/D1/A 8/8/2005 X
20 0123036 01 0123036/1 COLIN ANSELL C ENCOMPOL/D1/A 15/8/2005 X
20 0123036 01 0123036/1 COLIN ANSELL C ENCOMPOL/D1/A 22/8/2005 O
How do I massage this using a Stored Procedure into the following output?
RegID StudID SeqNo EnrolNo Name Status Prog 8/8/2005 15/8/2005 22/8/2005
20 0123036 01 0123036/1 COLIN ANSELL C ENCOMPOL/D1/A X X O
Bearing in mind that the date columns are dynamic and can be anything from 0 to 18.
I have found some scripts but I couldn't get them to work with dates :(
Thanks.|||See if these help.
Dynamic Crosstab Queries
http://www.windowsitpro.com/SQLServ...5608/15608.html
Dynamic Cross-Tabs/Pivot Tables
http://www.sqlteam.com/item.asp?ItemID=2955
AMB
"Mark Parter" wrote:

> Say I have the following output from a view on SQL Server 2000;
> RegID StudID SeqNo EnrolNo Name Status
Prog AttendDate AttendCode
> 20 0123036 01 0123036/1 COLIN ANSELL C ENCOMPOL/D1/A 8/8/2005 X
> 20 0123036 01 0123036/1 COLIN ANSELL C ENCOMPOL/D1/A 15/8/2005 X
> 20 0123036 01 0123036/1 COLIN ANSELL C ENCOMPOL/D1/A 22/8/2005 O
> How do I massage this using a Stored Procedure into the following output?
> RegID StudID SeqNo EnrolNo Name Status
Prog 8/8/2005 15/8/2005 22/8/200
5
> 20 0123036 01 0123036/1 COLIN ANSELL C ENCOMPOL/D1/A X X O
> Bearing in mind that the date columns are dynamic and can be anything from
0
> to 18.
> I have found some scripts but I couldn't get the mto work with dates :(
> Thanks.

dynamic cross tab without aggregation? PLEASE help

Ok... we have a db that the company wants to store roles in (i.e.
'auditor', 'accountant', etc.) that are active for a specific date
range... it's a weird paradigm, don't ask... but we need a specific # of
roles filled for each date...
here's the script
Create table tblRoles(RoleID int primary key identity(1,1), RoleName
nvarchar(255))
create table tblDates(DateID int primary key identity(1,1),
ProjectDate DateTime)
create table tblRolesForDate(RoleForDateID int primary key identity
(1,1), RoleID int, DateID int, NumberNeeded int)
what I need to see is a table from that...like this.
1/1 1/2 1/3 1/4 1/5
auditor 0 5 5 2 0
account 5 5 5 0 15
etc...
any ideas on how to make that work? the dates across the top are
from the dates table, the names in the left column are from roles, and
the #'s come from RolesForDate...
Grrr
Replied in .programming
Please do NOT multi-post.
David Portas


Dynamic Cross Tab Using Cursors.

I have two tables
1)Rollout_detail
start_date Datetime,
contract_date Datetime,
budget_amt Money
store_id int (foreign key referring to store.store_id)
pan_number varchar(20)
roll_id int

2)store
store_id int(primary key)
skey varchar(10)

These two tables are tied with store_id,
and in rollout_detail there can be many pan_numbers for a given
store. (pan_number + store) is unique.

Now here is the problem.

I need to generate a cross tab report with
store, pan1_contract_date, pan1_budget_amt, pan1_start_date,
pan2_contract_date, pan2_budget_amt, pan2_start_date and so on.

I tried this with in a procedure.
------------------
CREATE PROCEDURE crosstab
@roll_id INT
WITH ENCRYPTION
AS
DECLARE @sql VARCHAR(8000), @panNumber VARCHAR(20)
SET NOCOUNT ON
SET ANSI_WARNINGS OFF
DECLARE al_cursor CURSOR FOR SELECT DISTINCT pan_number AS p_pannumber
FROM rollout_detail WHERE roll_id = @roll_id
OPEN al_cursor
SELECT @sql = 'SELECT skey, '
FETCH NEXT FROM al_cursor INTO @panNumber
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @sql = @sql + '(CASE pan_number WHEN
''' + rtrim(@panNumber) + ''' THEN start_date END) as
P' + rtrim(@panNumber) + '_sd,(CASE pan_number WHEN
''' + rtrim(@panNumber) + ''' THEN budget_amt END) as
P' + rtrim(@panNumber) + '_ba,(CASE pan_number WHEN
''' + rtrim(@panNumber) + ''' THEN contract_date END) as
P' + rtrim(@panNumber) + '_cd,'

FETCH NEXT FROM al_cursor INTO @panNumber
END
SELECT @sql = left(@sql, len(@sql)-1) + ' '
SELECT @sql = @sql + 'FROM rollout_detail rd, store s
WHERE rd.store_id = s.store_id AND roll_id=' + cast(@roll_id as varchar)

EXEC (@sql)
CLOSE al_cursor
DEALLOCATE al_cursor
SET ANSI_WARNINGS ON

EXECUTE crosstab 1
------------------
I am getting multiple records for the same store;
the result set is:

skey pan1_cd pan1_ba pan1_sd pan2_cd pan2_ba pan2_sd
1 12/2/04 400.0 3/4/05 NULL NULL NULL
1 NULL NULL NULL 5/6/04 566.00 3/4/04

I want the result set merged for each store.

Please help,

Thanks a million

*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!|||"Jaidev Paruchuri" <jaip26@hotmail.com> wrote in message
news:408fbf3d$0$200$75868355@news.frii.net...
> I have two tables
> 1)Rollout_detail
> start_date Datetime,
> contract_date Datetime,
> budget_amt Money
> store_id int(foriegn key referring store.store_id)
> pan_number varchar(20)
> roll_id int
> 2)store
> store_id int(primary key)
> skey varchar(10)
> these two tables are tied with store_id
> and in rollout_detail there can be many pan_numbers for a given
> store.(pan_number + store ) are unique.
> Now here is the problem.
> I need to generate a cross tab report with
> store,pan1_contract_date,pan1_budget_amt,pan1_star t_date,
> pan2_Contract_date,pan2_budget_amt,pan2_start_date and so on.
> I tried this with in a procedure.
> ------------------
> CREATE PROCEDURE crosstab
> @.roll_id INT
> WITH ENCRYPTION
> AS
> DECLARE @.sql VARCHAR(8000),@.panNumber VARCHAR(20)
> SET NOCOUNT ON
> SET ANSI_WARNINGS OFF
> DECLARE al_cursor CURSOR FOR SELECT DISTINCT pan_number AS p_pannumber
> FROM rollout_detail WHERE roll_id=@.roll_id
> OPEN al_cursor
> SELECT @.sql='SELECT skey, '
> FETCH NEXT FROM al_cursor INTO @.panNumber
> WHILE @.@.FETCH_STATUS = 0
> BEGIN
> SELECT @.sql=@.sql+'(CASE pan_number WHEN
> '''+rtrim(@.panNumber)+''' THEN start_date END) as
> P'+rtrim(@.panNumber)+'_sd,(CASE pan_number WHEN
> '''+rtrim(@.panNumber)+''' THEN budget_amt END) as
> P'+rtrim(@.panNumber)+'_ba,(CASE pan_number WHEN
> '''+rtrim(@.panNumber)+''' THEN contract_date END) as
> P'+rtrim(@.panNumber)+'_cd,'
> FETCH NEXT FROM al_cursor INTO @.panNumber
> END
> SELECT @.sql=left(@.sql, len(@.sql)-1)+' '
> SELECT @.sql=@.sql+'FROM rollout_detail rd,store s
> WHERE rd.store_id=s.store_id AND roll_id='+cast(@.roll_id as varchar)
> EXEC (@.sql)
> CLOSE al_cursor
> DEALLOCATE al_cursor
> SET ANSI_WARNINGS ON
> EXECUTE crosstab 1
> ------------------
> I am getting multiple records for the same store
> the result set is.
> skey pan1_cd pan1_ba pan1_sd pan2_cd pan2_ba pan2_sd
> 1 12/2/04 400.0 3/4/05 NULL NULL NULL
> 1 NULL NULL NULL 5/6/04 566.00 3/4/04
>
> I want the result set merged for each store.
> Please help,
> Thanks a million
> *** Sent via Developersdex http://www.developersdex.com ***
> Don't just participate in USENET...get rewarded for it!

You're probably looking for something like this - using MAX() and GROUP BY:

select skey,
max(case when ...) as A,
max(case when ...) as B,
...
from
...
where
...
group by skey

Simon|||Simon,

That worked.
Thank you very much for the solution.
Though it was simple, I was thinking about alternate solutions. I really
appreciate you guys.

--Jay

*** Sent via Developersdex http://www.developersdex.com ***
Don't just participate in USENET...get rewarded for it!