
Monday, March 12, 2012

Place Unsorted row at end of detail

Hi There

I have a row returned from my stored procedure called 'Total', I would like it to appear at the bottom of the detail section rows.

I have placed the expression below as the sort on the table; the trouble now is that since 'Total' is not being sorted, it appears at the top. Is there a way to get it to the bottom?

=iif(Fields!Item.Value <> "Total", true ,false)

Thanks in advance

Dave

Several possible options:

- reverse the true and false parts of your expression
- use an integer

|||

Thanks for the reply,

- I can't reverse the true and false, as I need the rest of the rows sorted alphabetically

- I need it sorted ascending

- Use an integer: not sure what you mean here?

Thanks

Dave

|||

Then put 2 things in your sort of the table:

firstly sort by: =iif(Fields!Item.Value <> "Total", 1 ,2)
then sort by: =Fields!Item.Value

So all your detail rows will get a 1 in the first sort expression and be sorted by name in the second expression, and your total field will get a 2 in the first expression and hence come after your detail rows.
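The two-key trick above is not specific to Reporting Services; here is a minimal sketch of the same sort using Python with SQLite (table name and data invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE report (item TEXT, qty INTEGER)")
conn.executemany("INSERT INTO report VALUES (?, ?)",
                 [("Total", 60), ("Bolts", 10), ("Anchors", 20), ("Clips", 30)])

# First key: 'Total' goes into group 2, everything else into group 1.
# Second key: alphabetical order within each group.
rows = conn.execute("""
    SELECT item FROM report
    ORDER BY CASE WHEN item = 'Total' THEN 2 ELSE 1 END, item
""").fetchall()
print([r[0] for r in rows])
```

The 'Total' row always lands last, while the rest stay alphabetical.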

|||

Legend,

Thanks fella that worked a treat!

Place count in variable

Would like to know if this is even possible to do - using SQL Server 2000.
I am creating a procedure that truncates a table then inserts one row of data.
In the third field I need to insert the count of another table.
I thought that this would work:
@DetCount int = SELECT Count(*) FROM TableName
My question is: can I assign a COUNT(*) to a variable in a stored procedure?
I would then use the variable in the VALUES clause of the INSERT statement.

|||

Hi Robert,
Yes - you can do that. You just need to wrap your Select Statement in
brackets.
e.g
Declare @Count Int
Set @Count = (Select count(*) From TableA)
Insert Into TableB (Col1)
Values (@Count)
Select * From TableB
HTH
Barry

|||

Works perfect! Thanks a million.
"Barry" wrote:

> Hi Robert,
> Yes - you can do that. You just need to wrap your Select Statement in
> brackets.
> e.g
> Declare @Count Int
> Set @Count = (Select count(*) From TableA)
> Insert Into TableB (Col1)
> Values (@Count)
>
> Select * From TableB
>
> HTH
> Barry
>

|||

The other option is to use the SELECT statement to assign values to a
variable:
SELECT @Count = COUNT(*) FROM TableA
This is useful when setting the values of multiple variables at once,
e.g.:
SELECT @Count = COUNT(*),
       @Avg = AVG(someColumn)
FROM TableA
HTH,
Stu
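Both answers boil down to the same flow: fetch a scalar, hold it in a variable, reuse it in the INSERT. A quick sketch of that flow with Python's sqlite3 module (table names TableA/TableB taken from the thread, data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TableA (x INTEGER)")
conn.executemany("INSERT INTO TableA VALUES (?)", [(1,), (2,), (3,)])
conn.execute("CREATE TABLE TableB (Col1 INTEGER)")

# Equivalent of SET @Count = (SELECT COUNT(*) FROM TableA):
# pull the scalar into a host variable...
count = conn.execute("SELECT COUNT(*) FROM TableA").fetchone()[0]

# ...then use the variable in the VALUES clause of the INSERT.
conn.execute("INSERT INTO TableB (Col1) VALUES (?)", (count,))
stored = conn.execute("SELECT Col1 FROM TableB").fetchone()[0]
```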

Wednesday, March 7, 2012

PK creation

Is there any way to speed up primary key creation on a
table?
I have a 600 million row table and it's taking forever.
I know that the data in the table is clean and the data in the PK columns I
am creating is unique.
alter table Profile_table add primary key (as_of_date,
PAN11_ACCOUNT_NUMBER, PRODUCT_CODE)
Thanks
Sanjay

|||

If this table doesn't already have too many foreign keys associated with it,
what I would do is create another table with a similar structure, add all
the foreign keys you need to it, and then import the data over (using DTS or
some data transfer method that doesn't perform any logging) from the
existing 600 million record table.
This is exactly what SQL server does (behind the scenes) when you try to add
a foreign key to the table, except that it logs the transfer of every single
row into the new table, and that's what takes up all the time. The way you
are doing it, is just a little bit more manual, and you are shutting off the
whole logging process, which saves you A TON of time and should be
relatively faster.
Good Luck
Awah-
"Sanjay" <sanjayg@hotmail.com> wrote in message
news:02fe01c37179$ea8ebb70$a101280a@phx.gbl...
> Is there any way to speed up Primary key creation on a
> table
> I have a 600 million row table and it's taking forever
> I know that the data in the table is clean and the data in the PK columns I
> am creating is unique
> alter table Profile_table add primary key (as_of_date,
> PAN11_ACCOUNT_NUMBER, PRODUCT_CODE)
> Thanks
> Sanjay

PK and Timestamp in same table?

Is there any reason to have a row that is the PK/Identity and a row
that is datatype Timestamp in the same table?

Does this in any way help speed up row updates?

Thanks,
lq

|||

I think you mean columns :P

Timestamps should have nothing to do with speeding up updates (or any
query). They should also not be used in a PK/Identity combo. BOL has
a nice comment on that second notion:

"timestamp is used typically as a mechanism for version-stamping table
rows...."

"The value in the timestamp column is updated every time a row
containing a timestamp column is inserted or updated. This property
makes a timestamp column a poor candidate for keys, especially primary
keys. Any update made to the row changes the timestamp value, thereby
changing the key value. If the column is in a primary key, the old key
value is no longer valid, and foreign keys referencing the old value
are no longer valid. If the table is referenced in a dynamic cursor,
all updates change the position of the rows in the cursor. If the
column is in an index key, all updates to the data row also generate
updates of the index.
"

|||

Oops. Yes, I meant columns!

Thanks for that.

Somewhere I thought I read that having a TimeStamp column would speed up
UPDATE activities on rows.

|||

"laurenq uantrell" <laurenquantrell@hotmail.com> wrote in message
news:1128703331.710695.156120@f14g2000cwb.googlegroups.com...
> Is there any reason to have a row that is the PK/Identity and a row
> that is datatype Timestamp in the same table?
> Does this in any way help speeding up row updates?

Can't see that adding a timestamp field would speed anything up - in fact
the presence of a field that is auto-populated means that, by definition,
every time you INSERT or UPDATE the row, it'll have to do some work writing
the current date and time to the table.

A timestamp is, of course, useful as a "last modified date/time" indicator
for your applications, though.

D.

|||

laurenq uantrell wrote:
> Is there any reason to have a row that is the PK/Identity and a row
> that is datatype Timestamp in the same table?
> Does this in any way help speeding up row updates?
> Thanks,
> lq

Timestamp is useful for checking if a row has been updated by someone
else since you read it, so you don't need to check every other column
value and compare.

It's also a good idea if you use Access (and possibly other) front-end
if you have any floating point data types (even dates) as floating point
errors can cause the front end to think the row has been updated by
someone else even if it hasn't. (You've no doubt seen in CDMA,
timestamps recommended as cures for such problems).

|||

> A timestamp is, of course, useful as a "last modified date/time" indicator
> for your applications, though.

Laurenq referred to the timestamp *data type* in his post. The timestamp
data type is a misnomer because it is not related to date or time. The
system-generated timestamp is simply an 8-byte binary value, guaranteed
to be unique within a database, that is updated automatically
whenever any data in the row changes. Consequently, the primary purpose of
timestamp is for optimistic concurrency checks to see if the row was updated
by another user. For example:

UPDATE MyTable
SET
SomeColumn1 = @SomeValue1,
SomeColumn2 = @SomeValue2,
SomeColumn3 = @SomeValue3
WHERE
MyPK = @MyPK AND
MyTimestamp = @OriginalMyTimestamp

IF @@ROWCOUNT = 0
BEGIN
RAISERROR ('Data was updated or deleted by another user', 16, 1)
END
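SQLite has no timestamp/rowversion type, but the same optimistic-concurrency check can be sketched with a plain integer version column that every writer bumps (schema and data invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE MyTable (MyPK INTEGER PRIMARY KEY, SomeColumn TEXT, Version INTEGER)")
conn.execute("INSERT INTO MyTable VALUES (1, 'original', 1)")

def update_row(pk, new_value, original_version):
    # The WHERE clause only matches if nobody changed the row since we read it,
    # exactly like the MyTimestamp = @OriginalMyTimestamp check above.
    cur = conn.execute(
        "UPDATE MyTable SET SomeColumn = ?, Version = Version + 1 "
        "WHERE MyPK = ? AND Version = ?",
        (new_value, pk, original_version))
    return cur.rowcount  # 0 means another writer got there first

first = update_row(1, "mine", 1)    # succeeds and bumps Version to 2
second = update_row(1, "stale", 1)  # fails: still holds the old Version
```

A rowcount of 0 is the cue to raise the "updated by another user" error.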

--
Hope this helps.

Dan Guzman
SQL Server MVP

"David Cartwright" <dscartwright@hotmail.com> wrote in message
news:di8842$pq7$1@nwrdmz02.dmz.ncs.ea.ibs-infra.bt.com...
> "laurenq uantrell" <laurenquantrell@hotmail.com> wrote in message
> news:1128703331.710695.156120@f14g2000cwb.googlegroups.com...
>> Is there any reason to have a row that is the PK/Identity and a row
>> that is datatype Timestamp in the same table?
>> Does this in any way help speeding up row updates?
> Can't see that adding a timestamp field would speed anything up - in fact
> the presence of a field that is auto-populated means that, by definition,
> every time you INSERT or UPDATE the row, it'll have to do some work
> writing the current date and time to the table.
> A timestamp is, of course, useful as a "last modified date/time" indicator
> for your applications, though.
> D.

|||

The short answer is "No", it makes the table bigger and size will slow
down operations (probably not by much, but some).

The right answer is first, get the logical design right. An IDENTITY
cannot ever be a logical key, so is this thing actually a table at all?
What would the TIMESTAMP mean in your data model?

|||

laurenq uantrell (laurenquantrell@hotmail.com) writes:
> Oops. Yes, I meant columns!
> Thanks for that.
> Smewhere I thought I read that having a TimeStamp column would speed up
> UPDATE activities on rows.

As Trevor said - you don't have to check all columns to check for
concurrent updates, so the WHERE clauses of your updates are slightly
faster.

But as Celko pointed out, eight bytes more means bigger table, and degrades
performance.

I would say that timestamp is mainly a booster for development, as it
makes checks for concurrent updates easier to implement.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Pivoting Row Values into Columns

I'd like to get some data which includes month values bound to a data grid. The data is stored in a table like so:

Measure Month Value
A June 10.00
A July 9.00
A Aug 11.00
B Jun 100.00
B Jul 98.00
B Aug 99.00
C Jun 0.75
C Jul 0.8
C Aug 0.91

I need to report the data like this:
Measure Jun Jul August
A 10 9 11
B 100 98 99
C 75% 80% 91%

This was simple in classic ASP. Just use two recordsets, create a new table cell for each month using the first recordset then use the second recordset for each row.

But is there a way to "Pivot" or rotate the data so I can use the DataGrid? It only seems possible if each month has its own column field in the table. Each month would add a new column.

I can restructure the database, if needed.

I thought about creating a Cube, but that seems to have its own limitations. For example, what if I want to add a column for quarter and year totals? I don't think it's possible to show multiple planes like that in a query of a cube.

It seems that this might be resolved in the presentation layer or the data layer. Any suggestions?

|||

You can write a view or a stored procedure to produce the result set in your preferred format. The code below will only work when the same month has the same spelling in all rows. E.g., for June, always use either Jun or June. In the following example, I am assuming Jun, Jul and Aug are used.


Select measure,
MAX(CASE [Month] WHEN 'Jun' THEN Value END) AS Jun,
MAX(CASE [Month] WHEN 'Jul' THEN Value END) AS Jul,
MAX(CASE [Month] WHEN 'Aug' THEN Value END) AS Aug
From YourTable
Group By Measure

Then you can bind the returned result set to your datagrid.
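The MAX(CASE ...) pivot runs essentially unchanged on most engines; here is a self-contained check with Python and SQLite, using the first two measures from the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (Measure TEXT, Month TEXT, Value REAL)")
conn.executemany("INSERT INTO YourTable VALUES (?, ?, ?)", [
    ("A", "Jun", 10.0), ("A", "Jul", 9.0), ("A", "Aug", 11.0),
    ("B", "Jun", 100.0), ("B", "Jul", 98.0), ("B", "Aug", 99.0)])

# Each CASE picks out one month; MAX collapses the group to a single row,
# ignoring the NULLs produced by the non-matching months.
rows = conn.execute("""
    SELECT Measure,
           MAX(CASE Month WHEN 'Jun' THEN Value END) AS Jun,
           MAX(CASE Month WHEN 'Jul' THEN Value END) AS Jul,
           MAX(CASE Month WHEN 'Aug' THEN Value END) AS Aug
    FROM YourTable
    GROUP BY Measure
    ORDER BY Measure
""").fetchall()
```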

Pivoting a one-many relationship into 1 row

Hey everyone,
this message comes in two forms, the short version and the long detailed
version-- that way hopefully I can get all the help possible as fast as
possible :)
Short version:
I have a table Names(names_id, name1, name2) that I normalized into 2
tables: Name(name_id, name) and ConnectNames(oldName_id, name_id,
number). The number column just indicates if the name was from the name1
column or the name2 column. Given that I've now normalized this, if I
need to get what was once 1 row (for example: 123, Bob, John), it will
appear as 2 rows if I need to get it by the original nameId as follows:
select oldName_id, name, number
from connectNames cn
inner join name n on n.name_id = cn.name_id
where oldName_id=123
would now return:
123 Bob 1
123 John 2
What I need however is for this to be displayed inline like it used to
so that it can be returned in one row (because this gets joined to other
tables). The issue: I'm dealing with millions of rows in this table, and
millions of rows in the other tables that ultimately got joined with the
old denormalized Names table. I've tried using a pivot table approach on
the normalized data to get it in 1 row, but it is very slow when
returning large rowcounts. I've tweaked the indexes, but you can only
get so much performance.
How can I do this better?
Thanks a bunch in advance,
DS
Long detailed version:
I started with a table Names(<pk>names_id, name1, name2). Problem is I
needed to search by name, so I normalized this into a names table and a
cross-reference table:
Name(<pk>name_id, name) and ConnectNames(oldName_id, name_id, number).
I've included the code for this at the bottom of the message for how I
went about this.
Hurrah for normalization, now it is easy to search for a name:
select oldName_id from connectNames cn inner join name n on n.name_id = cn.name_id
where name = @nameToSearch;
This however presented another problem: I need to be able to display
both name1 and name2 in a single row-- I need to pivot what I just
created (that's actually why I sneaked in the number column into the
cross-ref table to make it easy to pivot). A solution I grabbed from
MSDN was to create a view that I could then join onto twice:
create view connectNamesView as
select oldName_id,
MIN(CASE number WHEN 1 THEN name_id END) AS name_id1,
MIN(CASE number WHEN 2 THEN name_id END) AS name_id2
from connectNames
group by oldName_id
select n1.name, n2.name from connectNamesView cnv
inner join name n1 on n1.name_id=cnv.name_id1
inner join name n2 on n2.name_id=cnv.name_id2
The real issue with this though is that as you may imagine this is
pretty resource intensive, especially when you have several million
records in these tables, and when you join oldName_id to another table
with several million records like this:
select id, product, name1, name2
from Owners o -- note: owners has millions of rows too
inner join connectNamesView cnv on cnv.oldName_id = o.name_id
inner join name n1 on n1.name_id=cnv.name_id1
inner join name n2 on n2.name_id=cnv.name_id2
where id between 10000 and 20000
Running this takes ages when it's joined to another table. Is there a
better way to improve performance or to denormalize the results JUST for
display (display them in 2 columns)? (By the way, is denormalize the
correct term for doing what I need to here?)
Thanks in advance for the help and reading this long-winded post :)
-DS
To normalise the table I did the following:
create table tmpName (id int, name varchar(20), number tinyint);
insert into tmpName (id, name, number) select names_id, name1, 1 from Names
insert into tmpName (id, name, number) select names_id, name2, 2 from Names
create table Name (id int not null identity(1,1), name varchar(20))
insert into Name (name) select distinct name from tmpName
-- at this point the Names table has been normalized; now to create the
one to many relationship:
create table ConnectNames(oldName_id int, name_id int, number tinyint)
insert into ConnectNames(oldName_id, name_id, number)
select t.id, n.id, t.number
from name n
inner join tmpName t on t.name = n.name
drop table tmpName

|||

Providing the relationship is always 1 to 1 (exactly one name for each
old_name), then this might work (untested):
select name as Name1
,Name2 = (select name
from Names OldNames
where (OldNames.oldName_id = Names.name_id))
from Names
For a better solution post DDL, sample data, preferably with expected results.
ML

Saturday, February 25, 2012

Pivot transform not putting all values on same row

I have a pivot transform that I believe is configured correctly but is not distributing the values across the columns on the same row. For example:

input:

id seqno codevalue

1 A red

1 B red

2 C blue

2 A green

2 B violet

3 A green

desired output:

id Seq_A Seq_B Seq_C

1 red red null

2 green violet blue

3 green null null

what I am getting:

id Seq_A Seq_B Seq_C

1 red null null

1 null red null

2 green null null

2 null violet null

2 null null blue

3 green null null

I do have the pivot usage for the id column set to 1. I have the pivot usage for the seqno column set to 2 and the codevalue column set to 3. I have the source column for each of the output columns set to the lineageID of the appropriate input columns. I have the pivotKey values set for each of the destination columns: A for column Seq_A, B for column Seq_B, C for column Seq_C. All four columns have sortkey positions set; 1 for id, 2 for Seq_A, 3 for Seq_B and 4 for column Seq_C.

It seems like the id column's pivot usage is not set to 1 like it should be, but when I check it is 1.

I also have several other pivot transforms in the same data flow and they are working as expected.

I have a suspicion that there is some hidden metadata that is messed up and is overriding my settings (just my guess). I have deleted this transform and re-done it several times, checking each configuration value, but still getting the same result.

Need some help or thoughts on making this work.

Thanks

Do you need to trim seqno before using it in a pivot? Might there be trailing white spaces or anything?

|||

The input source has the column trimmed to char(1); I have done a dataviewer on the input and the output, and the input looks good. I have also recently (yesterday) installed SP2.

In my package, the pivot transform is after a union all transform. However, I have checked the output of the union all with a data viewer and the data input to the pivot transform looks good.

Earlier in my development of this dataflow, I did have some problems with the data source for this pivot transform, but I fixed it. And then checked the output of the pivot and found that it was not right. That is when I deleted the pivot transform and re-created it. But I still had the problem. I have since re-checked every configuration value in the pivot transform and the upstream and downstream transforms. All look to be configured correctly, but still the Pivot is putting each value on its own row and not across the same row, for a given id.

Pivot table (was "help")

I have a table like

id charge
10 9921
10 4152
10 5879

I want to display this data in one row like

id charge charge charge
10 9921 4125 5879

and I have thousands of records, which I want to display in this pattern

thanks
makham

|||

I've moved your posting to a new forum, it seems more likely to attract comment in the SQL forum than the New Users and Introductions forum.

How many rows of data are you using, and how many repeating "charge" values? That will make a huge difference in how you approach a solution.

A pivot should really be done by the client, rather than being done by the database server. The client side has tools that are better suited to this kind of task, as well as more direct contact with the user.

-PatP

|||

I have thousands of rows, and charges repeat up to 10 (that is, I can have 10 charge values for a single id). And what's this about a new forum, and how can I get there?

thanks
makham

|||

This sounds more like a user interface problem rather than a SQL problem.

The SQL should be fairly straightforward; it's a grouping or sub-select at first glance.

Before we dive into detailed SQL statements, what is your user interface (i.e. how are you presenting this information: using VB, VC, .NET, a web page, or Access)?

|||

You're already in the "new" forum. I moved the thread, but left a "tail" on it that you just "automagically" follow when you clicked on it.

Since HealdM agrees with me, we're 2 for 2 in thinking that you should really handle the grouping on the client side, not at the server. What kind of client side software do you have, because knowing that will make the final choice much easier to make.

-PatP

|||

If the database is MySQL, use the GROUP_CONCAT function.

See, not all "grouping" has to be done on the client side :)

|||

No, you can do grouping on the server using SQL Server just as well as you can with MySQL. My comment was that grouping should not be done on the server, not that grouping could not be done there.

-PatP

|||

Are you suggesting that someone using MySQL should avoid using the GROUP_CONCAT function?

Because that's sure what it sounds like you're saying.

|||

When using the GROUP_CONCAT function makes sense, go for it.

Application design is always a balancing act. There is almost never just one way to do something. You have to consider what you are doing, and why you are doing it to know what is the best solution for you in a given circumstance.

If you are producing a static text report, that will always be in one form (completely "cold", not interactive in any way), then GROUP_CONCAT could be a good choice, especially for a pure two-tier environment. If you have application servers involved, or if your user interface is interactive (to the extent that grouping might change), then GROUP_CONCAT probably isn't a good choice because it will force additional round-trips to the app or database server.

-PatP

|||

i have table like

id charge
10 9921
10 4152
10 5879

i want to display this date in one row like

id charge charge charge
10 9921 4125 5879

I'm having a hard time seeing how this grouping might change, Pat, or how GROUP_CONCAT will force additional round-trips to the app or database server.

|||

Yes, if I fell into the logical trap of assuming that the posted example was exactly what was being run instead of an example, I'd probably come to the same conclusion. As this example is quite generic, and has nothing to make me think that it is the actual problem, I'm nearly certain that it is simply a snippet of contrived code to give us an idea of what the poster is doing.

In the real world, this kind of problem usually is much more complex because it returns a half dozen or more columns that can be used for grouping, and the UI almost always allows the end user to change the grouping... That is what I was referring to that would cause extra round trips.

-PatP

|||

> Yes, if I fell into the logical trap of assuming that the posted example was exactly what was being run instead of an example, I'd probably come to the same conclusion. As this example is quite generic, and has nothing to make me think that it is the actual problem, I'm nearly certain that it is simply a snippet of contrived code to give us an idea of what the poster is doing.

Well, I tell you what -- I'm going to continue to answer the questions that are actually asked, as asked, and you can go ahead and answer whatever you think the actual problem might be.

Perhaps we'll see each other in the same thread again ;)

|||

As we discussed offline, if the user is going to run that query, exactly that query, and only that query, then you and I agree that there is no harm (and actually minor benefit) in doing a pivot on the server.

I strongly believe that the pivot tools on the client are much better than pivot tools on the server. I also think that the query posted was only an example, and that other columns are probably involved, which makes the scenario I envisioned (with the potential for repeated trips to the server to re-pivot the data) much more likely.

If all of your assumptions hold true, then I agree with you. In tens of thousands of cases that I've seen in almost 30 years of programming, those assumptions probably would hold true in about five of the cases. I'm not willing to bet that this is another of them.

-PatP
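For what it's worth, GROUP_CONCAT is not MySQL-only; SQLite ships the same aggregate, so the server-side one-row-per-id form discussed above can be checked directly (data taken from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE charges (id INTEGER, charge INTEGER)")
conn.executemany("INSERT INTO charges VALUES (?, ?)",
                 [(10, 9921), (10, 4152), (10, 5879)])

# One output row per id. Note that SQLite does not guarantee the order of
# the concatenated values, so treat the result as an unordered list.
rows = conn.execute(
    "SELECT id, GROUP_CONCAT(charge, ' ') FROM charges GROUP BY id"
).fetchall()
```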

Monday, February 20, 2012

Pivot Query - Assigning Row Value Based on Column Name

Hey all,

I have a question for all the SQL gurus out there. I have tried to think of a way around it, but I cannot work it out.

I have a set of data: Samples Below:

Item Warehouse FOR1 FOR2 FOR3 FOR4 FOR5 FOR6 FOR7 FOR8 FOR9 FOR10 FOR11 FOR12 FOR13 FOR14

01-0001 010 329 329 335 343 317 331 328 331 31


I have written a Query to Pivot this data like below:

SELECT WAREHOUSE,ITEM, QTY

FROM

(SELECT ITEM,WAREHOUSE,FOR1,FOR2,for3,for4,for5,for6,for7,for8,for9,for10,
for11,for12,for13,for14,for15,for16,for17,for18,for19,for20,for21,
for22,for23,for24 FROM mvxreport.tbldmsForecasttoMovex) p

UNPIVOT

(QTY FOR tbldmsForecasttoMovex IN (FOR1,FOR2,for3,for4,for5,for6,for7,
for8,for9,for10,for11,for12,for13,for14,for15,for16,for17,for18,for19,
for20,for21,for22,for23,for24))AS unpvt

Warehouse Item Qty
010 01-0001 329
010 01-0001 329
010 01-0001 335
010 01-0001 343
010 01-0001 317
010 01-0001 331
010 01-0001 328
010 01-0001 331
010 01-0001 315
010 01-0001 344
010 01-0001 334
010 01-0001 321
010 01-0001 327
010 01-0001 328
010 01-0001 332
010 01-0001 342
010 01-0001 316
010 01-0001 330
010 01-0001 330
010 01-0001 331
010 01-0001 315
010 01-0001 343
010 01-0001 333
010 01-0001 322


I would like to add some more code to the query, so for each FOR% column,
I can put a numeric value in it. The value will be the numbers 1 - 24, one for each line, as this represents Months Forward.

Example:

Warehouse Item Qty Month
010 01-0001 329 1
010 01-0001 329 2
010 01-0001 335 3
010 01-0001 343 4
010 01-0001 317 5
010 01-0001 331 6
010 01-0001 328 7
010 01-0001 331 8
010 01-0001 315 9
010 01-0001 344 10
010 01-0001 334 11
010 01-0001 321 12
010 01-0001 327 13
010 01-0001 328 14
010 01-0001 332 15
010 01-0001 342 16
010 01-0001 316 17
010 01-0001 330 18
010 01-0001 330 19
010 01-0001 331 20
010 01-0001 315 21
010 01-0001 343 22
010 01-0001 333 23
010 01-0001 322 24


Does anyone know how i can do this?

Many Thanks

Scotty

Use the below query,

Code Block

SELECT WAREHOUSE,ITEM, QTY, replace(monthfor,'For','') as [Month]

FROM

(SELECT ITEM,WAREHOUSE,FOR1,FOR2,for3,for4,for5,for6,for7,for8,for9,for10,

for11,for12,for13,for14,for15,for16,for17,for18,for19,for20,for21,

for22,for23,for24 FROM mvxreport.tbldmsForecasttoMovex) p

UNPIVOT

(QTY FOR monthfor IN (FOR1,FOR2,for3,for4,for5,for6,for7,

for8,for9,for10,for11,for12,for13,for14,for15,for16,for17,for18,for19,

for20,for21,for22,for23,for24))AS unpvt
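The REPLACE(monthfor, 'For', '') step is the key idea: the month number is recovered from the column name itself. Stripped of the UNPIVOT machinery, that transformation looks like this in Python (one made-up wide record for illustration):

```python
# One wide record, as it might come out of the forecast table (values invented).
row = {"Item": "01-0001", "Warehouse": "010",
       "FOR1": 329, "FOR2": 329, "FOR3": 335}

# Strip the 'FOR' prefix from each column name to get the months-forward
# number -- the same trick as REPLACE(monthfor, 'For', '') in the query.
unpivoted = [
    (row["Warehouse"], row["Item"], qty, int(col.replace("FOR", "")))
    for col, qty in row.items() if col.startswith("FOR")
]
```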

|||

I think you can use ROW_NUMBER() function, something like that:

select Warehouse,Item, ROW_NUMBER() OVER (ORDER BY item) as Aqty

from

(

SELECT WAREHOUSE,ITEM, QTY

FROM

(SELECT ITEM,WAREHOUSE,FOR1,FOR2,for3,for4,for5,for6,for7,for8,for9,for10,
for11,for12,for13,for14,for15,for16,for17,for18,for19,for20,for21,
for22,for23,for24 FROM mvxreport.tbldmsForecasttoMovex) p

UNPIVOT

(QTY FOR tbldmsForecasttoMovex IN (FOR1,FOR2,for3,for4,for5,for6,for7,
for8,for9,for10,for11,for12,for13,for14,for15,for16,for17,for18,for19,
for20,for21,for22,for23,for24))AS unpvt

)

|||

That's awesome Manivannan, once again you come to the rescue!!

thanks mate

Scotty

|||

recome,

select Warehouse,Item, ROW_NUMBER() OVER (ORDER BY item) as Month

from ...

Pivot or Transpose a row

I would like to transpose/pivot a table that exists:

2 1 2 2 5
3 3 3 1 5
2 2 2 1 4
4 4 4 5 1
2 2 2 1 4
2 2 1 5 5

to:

1 1 2

1 2 3

1 3 2

1 4 4

1 5 2

1 6 2

2 1 1

2 2 3

2 3 2

2 4 4

2 5 2

2 6 2

Just looking at the first 6 rows of the new table:

Basically every column is copied to the third column of the new table. (1,1) becomes (1,3)... (2,1) becomes (2,3).

There are 6 rows in the original table so the second column in the new table is a recurring count from 1 to 6. There is a value of 1 in the first column of the new table (first 6 rows).

The logic in the old table is: every column contains 6 answers from a particular user.

In the new table (1,1) represents the user, (1,2) represents the question, (1,3) represents the users answer for the question.

Is there a quick way to do this in Transact-SQL using a cursor and the pivot function?

Any help would be appreciated!

There is no such concept as a row number in SQL Server, so we will need a criterion to sort the table. Let us suppose that the table has a column with the identity property; then we can use:

Code Snippet

create table dbo.t1 (

row_id int not null identity primary key,

c1 int,

c2 int,

c3 int,

c4 int,

c5 int

)

go

insert into dbo.t1(c1, c2, c3, c4, c5) values(2, 1, 2, 2, 5)

insert into dbo.t1(c1, c2, c3, c4, c5) values(3, 3, 3, 1, 5)

insert into dbo.t1(c1, c2, c3, c4, c5) values(2, 2, 2, 1, 4)

insert into dbo.t1(c1, c2, c3, c4, c5) values(4, 4, 4, 5, 1)

insert into dbo.t1(c1, c2, c3, c4, c5) values(2, 2, 2, 1, 4)

insert into dbo.t1(c1, c2, c3, c4, c5) values(2, 2, 1, 5, 5)

go

select

b.c1 as c_id,

a.row_id,

case b.c1

when 1 then a.c1

when 2 then a.c2

when 3 then a.c3

when 4 then a.c4

when 5 then a.c5

end as c3

from

dbo.t1 as a

cross join

(

select 1 as c1

union all

select 2 as c1

union all

select 3 as c1

union all

select 4 as c1

union all

select 5 as c1

) as b

order by

c_id,

a.row_id

go

-- SS 2005

select

c_id,

row_id,

val

from

(

select

row_id,

c1 as [1],

c2 as [2],

c3 as [3],

c4 as [4],

c5 as [5]

from

dbo.t1

) as pvt

unpivot

(val for c_id in ([1], [2], [3], [4], [5])) as unpvt

order by

c_id,

row_id

go

drop table dbo.t1

go

AMB
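AMB's first query (cross join against a numbers table, plus CASE) ports to almost any engine. Here it is shrunk to two columns and run through SQLite from Python, purely as a sanity check of the output shape:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t1 (row_id INTEGER PRIMARY KEY, c1 INT, c2 INT)")
conn.executemany("INSERT INTO t1 (c1, c2) VALUES (?, ?)", [(2, 1), (3, 3)])

# Cross join against a derived numbers table, then pick the matching
# column with CASE -- the same shape as the answer above.
rows = conn.execute("""
    SELECT b.c_id, a.row_id,
           CASE b.c_id WHEN 1 THEN a.c1 WHEN 2 THEN a.c2 END AS val
    FROM t1 AS a
    CROSS JOIN (SELECT 1 AS c_id UNION ALL SELECT 2) AS b
    ORDER BY b.c_id, a.row_id
""").fetchall()
```

Each source row appears once per source column, exactly as in the full query.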

|||

Here is one way:

select n,
row_number() over ( partition by n order by current_timestamp ),
case n when 1 then c1
when 2 then c2
when 3 then c3
when 4 then c4
when 5 then c5
end
from tbl cross join
( select 1 union
select 2 union
select 3 union
select 4 union
select 5 ) D ( n )

Here tbl is your table and c1, c2, ... denote the columns. The row_number() function is used to number the rows, and the order by current_timestamp is simply a shortcut to generate the sequence without any reliance on existing columns.

--

Anith

|||

Hello

This works too:

SELECT IDENTITY(int, 1,1) AS ID_Num, t.*
INTO #TestResults
FROM TestResults t

CREATE TABLE dbo.NewResults (iUser INT, iQuestion INT, iAnswer INT)

DECLARE @iColumns INT, @lcSQL VARCHAR(200)
SET @iColumns = 1

WHILE @iColumns <= 5 -- Number of result columns in table TestResults
BEGIN
SET @lcSQL = 'INSERT INTO dbo.NewResults (iUser, iQuestion, iAnswer) SELECT 1, ID_Num, c' + CAST(@iColumns AS VARCHAR(2)) + ' FROM #TestResults'
EXEC(@lcSQL)
SET @iColumns = @iColumns + 1
END

DROP TABLE #TestResults

SELECT * FROM dbo.NewResults
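The loop-and-EXEC approach above translates naturally to any host language. A sketch with Python and SQLite follows; note the original hard-codes iUser to 1, while this version uses the column number as the user id, which matches the "each column is one user" layout from the question (table shrunk to two columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE TestResults (ID_Num INTEGER PRIMARY KEY, c1 INT, c2 INT)")
conn.executemany("INSERT INTO TestResults (c1, c2) VALUES (?, ?)",
                 [(2, 1), (3, 3)])
conn.execute("CREATE TABLE NewResults (iUser INT, iQuestion INT, iAnswer INT)")

# One INSERT..SELECT per answer column. The column names are generated by
# the loop, not taken from user input, so string formatting is safe here.
for i in (1, 2):
    conn.execute(
        f"INSERT INTO NewResults SELECT {i}, ID_Num, c{i} FROM TestResults")

rows = conn.execute(
    "SELECT * FROM NewResults ORDER BY iUser, iQuestion").fetchall()
```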

|||

Anith, could you please explain the SQL syntax?

over ( partition by n order by current_timestamp )

and

D ( n )

thanks!


Pivot fields into a single column

I have a row that looks like this

Year 2006 2007 2008 2009.....etc.

I want a select statement that will get me this

Year

2006

2007

2008

2009

etc.

Try:

Code Snippet

create table #t (

c1 int not null,

c2 int not null,

c3 int not null,

c4 int not null

)

insert into #t values(2006, 2007, 2008, 2009)

select

case t2.c1

when 1 then t1.c1

when 2 then t1.c2

when 3 then t1.c3

when 4 then t1.c4

end as [Year]

from

#t as t1

cross join

(select 1 as [c1] union all select 2 union all select 3 union all select 4) as t2

-- 2005

select

[Year]

from

(select c1, c2, c3, c4 from #t) as p

unpivot

([Year] for [c] in (c1, c2, c3, c4)) as unpvt

drop table #t

AMB

|||

Thanks, but I can't use create table or insert. Appreciate the effort though.

|||

WVUProgramer wrote:

Thanks, but I can't use create table or insert. Appreciate the effort though.

You don't need to - he used the create table and insert to generate sample data so that his entire code example would run in its entirety.

You just need to use the select part of it (possibly modified to meet your exact requirements).

|||

Okay, thanks.

|||

For the first select, it says there's no from clause in the select statement following the cross join.

In the second one, it says unable to parse text.

I plugged in my field names (which are f1 through f4) and my table name where the t# is

Perhaps I'm confused about the [c] and the [Year].

|||

If you copy and paste the script, then execute it, you will see the result is the same as you are expecting in your original post.

Can you post the statement you are trying to execute?

AMB

|||

select

case t2.F1

when 1 then t1.F1

when 2 then t1.F2

when 3 then t1.F3

when 4 then t1.F4

end as [Year]

from

table1 as t1

cross join

(select 1 as [F1] union all select 2 union all select 3 union all select 4) as t2

My table name is table1; my fields are F1, F2, F3, F4.

Error Message: Error in SELECT clause: expression near 'UNION'.
Missing FROM clause.
Unable to parse query text.

|||

Where are you testing the script (which application)?

AMB
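A hedged guess as to where this thread was heading: "Missing FROM clause / Unable to parse query text" reads like the complaint of a graphical query designer (Access or the Visual Studio designer) rather than of SQL Server itself, and those designers often choke on a UNION inside a derived table even when the engine would accept it. A flat UNION ALL rewrite avoids the derived table entirely; sketched here against SQLite from Python with the thread's table1/F1..F4 names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (F1 INT, F2 INT, F3 INT, F4 INT)")
conn.execute("INSERT INTO table1 VALUES (2006, 2007, 2008, 2009)")

# One SELECT per column, glued with UNION ALL: no derived table, no CASE,
# so even picky query designers can usually parse it.
rows = conn.execute("""
    SELECT F1 AS Year FROM table1
    UNION ALL SELECT F2 FROM table1
    UNION ALL SELECT F3 FROM table1
    UNION ALL SELECT F4 FROM table1
""").fetchall()
```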