Monday, March 26, 2012
Please help - Transaction Log not shrinking!
My production DB transaction log has grown to 25 GB (it was under 8 GB
yesterday). DBCC DBREINDEX was issued against all tables as part of
monthly DB maintenance. Since then I have performed a full DB backup,
performed log backups, then performed a log backup with the TRUNCATE_ONLY
option, then reissued the DBCC SHRINKFILE command and tried to
shrink the log file via SSMS - nothing happens. The transaction log
file is still over 25 GB.
This is in SQL Server 2005. I've even changed the recovery model to
Simple, issued DBCC SHRINKFILE - no change - and then switched back
to Full - still no change.
What am I missing here?|||First read about backup and recovery models in Books Online. Then check out
http://www.karaszi.com/SQLServer/info_dont_shrink.asp.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
|||Try this:
USE master
BACKUP LOG YourDatabase WITH TRUNCATE_ONLY
GO
USE YourDatabase
DBCC SHRINKFILE (YourDatabase_log, 1000)
This should shrink the log file to 1 GB (the target size is given in MB, and YourDatabase_log is the logical name of the log file).
Ozcan
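Before shrinking, it can also help to see what is keeping the log active. A diagnostic sketch for SQL Server 2005 and later, using the log_reuse_wait_desc column of sys.databases (the database name is a placeholder):

```sql
-- Diagnostic sketch: check the recovery model and why the log
-- cannot currently be truncated. 'YourDatabase' is a placeholder.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'YourDatabase';
```

If log_reuse_wait_desc reports LOG_BACKUP or ACTIVE_TRANSACTION, the shrink cannot reclaim space until that condition is cleared.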
"Tibor Karaszi" wrote:
> First read about backup, and recovery model in Books Online. Then check out
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> <sqlQuest@.gmail.com> wrote in message news:1151589347.687402.206610@.x69g2000cwx.googlegroups.com...
> > Hi,
> >
> > My production DB transaction log has grown to 25 GBs. (It was <8 Gbs
> > yesterday). DBCC Reindex was issued against all tables as part of
> > monthly DB maintenance. Since then I have performed a full DB backup,
> > have performed log backups, then performed log backup with truncate
> > only option, then reissued the DBCC Shrinkfile command and tried to
> > shrink the log file via SSMS - nothing happens. The transaction log
> > file is still over 25 GBs.
> >
> > This is in SQL Server 2005. I've even changed the recovery model to
> > Simple, issued the DBCC Shrinkfile - no change, and then switched back
> > to FULL - still no change.
> >
> > What am I missing here?
> >
>
Please help - Transaction Log not shrinking!
My production DB transaction log has grown to 25 GBs. (It was <8 Gbs
yesterday). DBCC Reindex was issued against all tables as part of
monthly DB maintenance. Since then I have performed a full DB backup,
have performed log backups, then performed log backup with truncate
only option, then reissued the DBCC Shrinkfile command and tried to
shrink the log file via SSMS - nothing happens. The transaction log
file is still over 25 GBs.
This is in SQL Server 2005. I've even changed the recovery model to
Simple, issued the DBCC Shrinkfile - no change, and then switched back
to FULL - still no change.
What am I missing here?First read about backup, and recovery model in Books Online. Then check out
http://www.karaszi.com/SQLServer/info_dont_shrink.asp.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
<sqlQuest@.gmail.com> wrote in message news:1151589347.687402.206610@.x69g2000cwx.googlegroups
.com...
> Hi,
> My production DB transaction log has grown to 25 GBs. (It was <8 Gbs
> yesterday). DBCC Reindex was issued against all tables as part of
> monthly DB maintenance. Since then I have performed a full DB backup,
> have performed log backups, then performed log backup with truncate
> only option, then reissued the DBCC Shrinkfile command and tried to
> shrink the log file via SSMS - nothing happens. The transaction log
> file is still over 25 GBs.
> This is in SQL Server 2005. I've even changed the recovery model to
> Simple, issued the DBCC Shrinkfile - no change, and then switched back
> to FULL - still no change.
> What am I missing here?
>|||First read about backup, and recovery model in Books Online. Then check out
http://www.karaszi.com/SQLServer/info_dont_shrink.asp.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
<sqlQuest@.gmail.com> wrote in message news:1151589347.687402.206610@.x69g2000cwx.googlegroups
.com...
> Hi,
> My production DB transaction log has grown to 25 GBs. (It was <8 Gbs
> yesterday). DBCC Reindex was issued against all tables as part of
> monthly DB maintenance. Since then I have performed a full DB backup,
> have performed log backups, then performed log backup with truncate
> only option, then reissued the DBCC Shrinkfile command and tried to
> shrink the log file via SSMS - nothing happens. The transaction log
> file is still over 25 GBs.
> This is in SQL Server 2005. I've even changed the recovery model to
> Simple, issued the DBCC Shrinkfile - no change, and then switched back
> to FULL - still no change.
> What am I missing here?
>|||Try this:
USE MASTER
BACKUP LOG YourDatabase WITH TRUNCATE_ONLY
GO
USE YourDatabase
DBCC SHRINKFILE (YourDatabase _log, 1000)
This should shrink the log file to 1GB
Ozcan
"Tibor Karaszi" wrote:
> First read about backup, and recovery model in Books Online. Then check ou
t
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> <sqlQuest@.gmail.com> wrote in message news:1151589347.687402.206610@.x69g20
00cwx.googlegroups.com...
>|||Try this:
USE MASTER
BACKUP LOG YourDatabase WITH TRUNCATE_ONLY
GO
USE YourDatabase
DBCC SHRINKFILE (YourDatabase _log, 1000)
This should shrink the log file to 1GB
Ozcan
"Tibor Karaszi" wrote:
> First read about backup, and recovery model in Books Online. Then check ou
t
> http://www.karaszi.com/SQLServer/info_dont_shrink.asp.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> <sqlQuest@.gmail.com> wrote in message news:1151589347.687402.206610@.x69g20
00cwx.googlegroups.com...
>sql
PLEASE help - moving transaction log from one disk to another
In an effort to reclaim a large amount of space on disk, I have moved the
logfile from one drive to another using the steps outlined in this article:
http://www.microsoft.com/sql/techinfo/tips/administration/movingtransactionlogs.asp
when I attempted to reattach the database to the new log file on the new
drive, errors were thrown and SQL Server reported that the database was
corrupt.
Further research indicated that I had to hack the sysfiles1 table to change
the filename path to reflect the new drive location. I did this and rebooted
the server. This now created a suspect database.
How can I get this to work properly?
Thanks|||Hi
I think there may be a problem with access rights /
permissions when accessing the new drive and files.
Detach and Attach usually works fine.
The suspect status points to a problem
accessing the file / drive.
Thanks
Suri
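For reference, the detach/attach sequence can be sketched like this (SQL Server 7.0-era syntax; the database name and file paths are placeholders, not from the thread):

```sql
-- Sketch: detach, move the .ldf at the OS level, then attach
-- with the new path. All names and paths are placeholders.
EXEC sp_detach_db 'YourDatabase';
-- (copy YourDatabase_log.ldf to the new drive here)
EXEC sp_attach_db 'YourDatabase',
    'C:\Data\YourDatabase.mdf',
    'E:\Logs\YourDatabase_log.ldf';
```

If the attach fails because the service account cannot read the new location, the database can end up marked suspect, which matches the symptom described above.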
>--Original Message--
>SQL Server 7 SP3a.
>In an effort to reclaim a large amount of space on disk, I have moved the
>logfile from one drive to another using the steps outlined in this article:
>http://www.microsoft.com/sql/techinfo/tips/administration/movingtransactionlogs.asp
>when I attempted to reattach the database to the new log file on the new
>drive, errors were thrown and SQL Server reported that the database was
>corrupt.
>Further research indicated that I had to hack the sysfiles1 table to change
>the filename path to reflect the new drive location. I did this and rebooted
>the server. This now created a suspect database.
>How can I get this to work properly?
>Thanks
Friday, March 23, 2012
Please guide - especially about Time Dimension and approach in general
I have a table which contains all the transaction details for which I am trying to create a CUBE... The explanation below in brackets is only for clarity about each field. Kindly note that I am using the following table as my fact table. Let's call it tblFact
This table contains fields like transaction date, Product (which is sold), Product Family (to which the Product Belongs), Brand (of the product), Qty sold, Unit Price (at which each unit was sold), and some other fields.
I have created a Product dimension based on tblFact. I don't know if this is a good idea or not :confused: Is it okay and am I on the right track or should I base my Product Dimension on some other table (say tblProducts and then in the Cube editor link tblProducts with tblFact based on the ProductID field? Please guide.
Now coming to my last question:
Currently I am also creating my Time Dimension based on tblFact. Is this also a wrong approach?
1. Should I instead create the Time Dimension based on a different table (say tblTime) and again link up tblTime and tblFact in the Cube editor?
2. if yes, then how do I create tblTime from tblFact in a manner that it only contains all the transaction dates.
3. Assuming that I should take the tblTime approach, then should this table (tblTime) also contain the ProductID - representing the Product which was sold for each date in tblTime?
I realize that this is a lengthy post but a reply and more importantly guidance from experienced users will be greatly appreciated because I have recently started learning/playing around on the OLAP side of things and I know that this is the time to get my foundations correct, otherwise I'll end up getting used to some bad practice and will find it difficult to change my approach to cube design later down the road.
So many thanks in advance and I eagerly look forward to a reply from someone.|||No worries mate,
This is what the forum is for...
Ok - Down to what you need to do
When doing the design for a cube I always do a bit of analysis first. Looks like you have cracked this bit. You know what your dimensions are - Time, Product, Brand etc. This is what you should group on to build your fact table.
Your facts are going to be Qty Sold and Price. This is what you will be summing on with the SQL to build your fact table.
You said "should I base my Product Dimension on some other table (say tblProducts and then in the Cube editor link tblProducts with tblFact based on the ProductID field? Please guide."
This is exactly what a good cube design is based on mate.
I presume the basis of your fact table is a transactions type table.
First thing you need to do is build all your dimension tables.
A product dimension table should look something like:
create table tblProduct
(prod_id tinyint ,
prod_txt varchar (255)
)
Don't forget to put in an id for unknown product - just in case you get these in your base transaction table.
Build the rest of your dimension tables like this and assign a tinyint composite key to each dimension. What you want is for your fact table to be as small as possible in terms of datatypes.
Now once you have this you want to build your fact table.
What you do is take your base transaction table and join to each of your dimension tables (should be a left outer join), sum on the qty and unit price, and group up across all your dimensions.
This now should be your fact table that you can reference in Analysis Manager. You will have to define all this within there as well.
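The build described above might be sketched as follows (the transaction table and its column names are assumptions for illustration; only tblProduct is from the thread):

```sql
-- Sketch: aggregate the base transaction table across the dimension
-- keys. tblTransaction, trans_date, qty_sold and unit_price are
-- assumed names, not from the thread.
SELECT ISNULL(p.prod_id, 0)            AS prod_id,   -- 0 = unknown product
       t.trans_date,
       SUM(t.qty_sold)                 AS qty_sold,
       SUM(t.qty_sold * t.unit_price)  AS sales_value
FROM tblTransaction t
LEFT OUTER JOIN tblProduct p ON p.prod_id = t.prod_id
GROUP BY ISNULL(p.prod_id, 0), t.trans_date;
```

The LEFT OUTER JOIN keeps transactions whose product is missing from the dimension table, which is why the unknown-product id mentioned above matters.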
Whoo, that was an effort.
Any problems or questions, give me a shout.
Cheers|||Hi aldo_2003,
Many thanks for the reply. It answers a good number of my questions. Can you kindly advise regarding the remaining question i.e. the quoted portions below:
Build the rest of your dimension tables like this and assign a tinyint composite key to each dimension. What you want is for your fact table to be as small as possible in terms of datatypes.
Question: By composite key do you mean define a Primary key in each table? I will do so but was planning to define the datatype for my ProductID field (for example) in my tblProducts as int. However I'll follow your advice and instead use the datatype tinyint. Thanks for the tip :)
And my last question hopefully:
Now coming to my last question:
Currently I am also creating my Time Dimension based on tblFact. Is this also a wrong approach?
1. Should I instead create the Time Dimension based on a different table (say tblTime) and again link up tblTime and tblFact in the Cube editor?
2. if yes, then how do I create tblTime from tblFact in a manner that it only contains all the transaction dates.
3. Assuming that I should take the tblTime approach, then should this table (tblTime) also contain the ProductID - representing the Product which was sold for each date in tblTime?
I think Part 2 (above) is easy and all I have to do is make a copy of tblFact but this copy (say tblTime) will only contain the transaction date column (from tblFact). Kindly confirm my understanding.
However it's the answer to part 1 (above) and especially to part 3 (above) that is requested.
Looking forward to your reply.|||You're welcome,
forgot about the time question
what you want to do is create a tblTime dimension table
It should basically have the granularity that you want to use.
you can find scripts on the net that will help you create and manage a time dimension table.
table should look like
create table tblTime
(date_time smalldatetime,
quarter tinyint,
month tinyint,
week tinyint ,
day int
)
populate this table with all the dates in your date range i.e
Jan 1999 12:00am to Jan 2009 12:00am
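One way to populate it can be sketched with a simple loop (the date range matches the example above; the DATEPART codes are standard T-SQL):

```sql
-- Sketch: fill tblTime one day at a time for the example range.
DECLARE @d smalldatetime;
SET @d = '19990101';
WHILE @d < '20090101'
BEGIN
    INSERT INTO tblTime (date_time, [quarter], [month], [week], [day])
    VALUES (@d,
            DATEPART(quarter, @d),
            DATEPART(month, @d),
            DATEPART(week, @d),
            DATEPART(day, @d));
    SET @d = DATEADD(day, 1, @d);
END
```

A set-based insert from a numbers table would be faster for large ranges, but the loop makes the intent clear.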
This is now your time dimension table.
Using Analysis Manager, join back on to the fact table.
Analysis Manager should guide you through the process of creating a time dimension.
Hope this helps
p.s. the only reason I used a tinyint is that I assumed you would have no more than 255 products; if you have more, then up the datatype to what you need.
Cheers|||Hi again,
Don't mean to "over-flatter" but your replies REALLY have been of great help... Here I was trying to build everything (the fact as well as the dimensions) using only a single table and now I am quite clear regarding what's the right approach :)
3 last questions please :rolleyes:
you can find scripts on the net that will help you create and manage a time dimension table.
table should look like
create table tblTime
(date_time smalldatetime,
quarter tinyint,
month tinyint,
week tinyint ,
day int
)
That's another new tip :) Can you kindly guide where I can get these scripts from? I am assuming that these scripts will not only create the table (in a similar structure as you have suggested above) but will also populate the table with all the desired date ranges, e.g. Jan 1999 12:00am to Jan 2009 12:00am.
2nd last question: So my approach which I was assuming for creating tblTime (for the Time dimension) was incorrect, i.e. I thought that it would simply contain ALL the transaction dates from the transaction table (as I described in my previous post). But from your reply my understanding is that this approach is wrong.
and the last question: So the tblTime does not have to store the ProductID?
Sorry for all the botheration.|||No worries buddy
Glad to be of help
The time dimension table is stand alone and does not have to contain any other info other than time info.
I'll see if my colleague knows and get back to you
Cheers|||http://www.databasejournal.com/features/mssql/article.php/10894_1466091_6
http://www.winnetmag.com/Article/ArticleID/23142/Windows_23142.html|||Originally posted by aldo_2003
The time dimension table is stand alone and does not have to contain any other info other than time info.
I'll see if my colleague knows and get back to you
Cheers
Thanks aldo : for the reply, the link to articles, and also for the clarification about the time dimension's underlying table containing only time related information.
Kindly do let me know if you get a script which not only creates the table for time but also populates it ...
Bless you!|||Check this link http://www.winnetmag.com/Articles/ArticleID/41531/pg/4/4.html for any help.|||Originally posted by Satya
Check this link http://www.winnetmag.com/Articles/ArticleID/41531/pg/4/4.html for any help.
Hi Satya,
Many sincere thanks for the article. I went through it...
However at this stage I am learning the basics (as evident from this post and my questions to aldo), therefore I found the article to be real "HEAVY STUFF" and very difficult to digest at the moment. It talks about the MDX world and that's something that I have yet to explore... I know I will have to get into MDX soon but I have enough of the basics to get right first :)
I however would like to request if you can help me with another post from me in this forum (Subject: "Need help to create my Time Dimension")... it's somewhat related to a portion of the discussion in this post but I did not want to unnecessarily prolong this particular thread/post...
Looking forward to your help in my other post.
Thanks again and regards.|||Joozh
Since this has been answered so well I just wanted to chime in and suggest some reading for you:
The Data Warehouse Toolkit by Ralph Kimball (http://www.bestwebbuys.com/books/compare/isbn/0471200247/isrc/b-home-search)
This is the second edition of this book, revised in 2002, and it is a must-have for every OLAP developer.
Please explain Transaction for me
Where it shows
// Start a local transaction.
transaction = connection.BeginTransaction("SampleTransaction");
is the "SampleTransaction" nothing more then just a name of the transaction that is being created through this code or is it actually pointing to something?
I believe that I understand the rest.
Oh, and in case you have a better way of doing what I need, here is the SQL query that pulls the info I display on a page:
SELECT p.product_ID, p.class_ID, p.category_ID, p.product_name, p.product_desc, p.product_image, p.product_dimension, p.product_o1, p.product_o2,
p.product_o3, p.product_ac, p.product_ph, p.product_photo, pcl.class_name, pca.category_name
FROM products AS p INNER JOIN
productClass AS pcl ON p.class_ID = pcl.class_ID INNER JOIN
productCategories AS pca ON p.category_ID = pca.category_ID
Basically I am using the transaction to perform multiple updates seeing as how I will need to update multiple tables.
private static void ExecuteSqlTransaction(string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        SqlCommand command = connection.CreateCommand();
        SqlTransaction transaction;

        // Start a local transaction.
        transaction = connection.BeginTransaction("SampleTransaction");

        // Must assign both transaction object and connection
        // to Command object for a pending local transaction
        command.Connection = connection;
        command.Transaction = transaction;

        try
        {
            command.CommandText =
                "Insert into Region (RegionID, RegionDescription) VALUES (100, 'Description')";
            command.ExecuteNonQuery();
            command.CommandText =
                "Insert into Region (RegionID, RegionDescription) VALUES (101, 'Description')";
            command.ExecuteNonQuery();

            // Attempt to commit the transaction.
            transaction.Commit();
            Console.WriteLine("Both records are written to database.");
        }
        catch (Exception ex)
        {
            Console.WriteLine("Commit Exception Type: {0}", ex.GetType());
            Console.WriteLine("  Message: {0}", ex.Message);

            // Attempt to roll back the transaction.
            try
            {
                transaction.Rollback();
            }
            catch (Exception ex2)
            {
                // This catch block will handle any errors that may have occurred
                // on the server that would cause the rollback to fail, such as
                // a closed connection.
                Console.WriteLine("Rollback Exception Type: {0}", ex2.GetType());
                Console.WriteLine("  Message: {0}", ex2.Message);
            }
        }
    }
}
It's a name given to the transaction so that it can be used with the rollback and savepoint methods:
SqlTransaction.Rollback(String)
SqlTransaction.Save(String)
Read the Remarks section at MSDN: http://msdn2.microsoft.com/en-us/library/k1562zz1.aspx
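On the T-SQL side the name behaves like a label. A minimal sketch of a named transaction with a savepoint (the savepoint name is hypothetical; the Region inserts mirror the C# example above):

```sql
-- Sketch: named transaction plus a savepoint.
BEGIN TRANSACTION SampleTransaction;
    INSERT INTO Region (RegionID, RegionDescription) VALUES (100, 'Description');
    SAVE TRANSACTION BeforeSecondInsert;  -- hypothetical savepoint name
    INSERT INTO Region (RegionID, RegionDescription) VALUES (101, 'Description');
    -- Undo only the work after the savepoint; the first insert stays pending:
    ROLLBACK TRANSACTION BeforeSecondInsert;
COMMIT TRANSACTION SampleTransaction;
```

Rolling back to a savepoint keeps the outer transaction open, which is why a commit (or full rollback) is still required afterward.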
Tuesday, March 20, 2012
Please help
I have a developer who developed the following stored procedure.
My question is: since it starts with a BEGIN TRANSACTION, and if the record
exists in the EXISTS check it will return an ID without a commit /
rollback, does he need to either commit or roll back the transaction?
begin transaction
if EXISTS(select OpportunityId from Opportunity where OTN = @OTN and
CustomerName = @CustomerName and ResponseDate = @ResponseDate and CreateDate
= convert(char(11),getdate(),106))
Begin
Select @OpportunityId = OpportunityId from Opportunity where OTN = @OTN
and CustomerName = @CustomerName and ResponseDate = @ResponseDate and
CreateDate = convert(char(11),getdate(),106)
Return @OpportunityId
End
else
Begin
Insert into Opportunity (OTN, CustomerName, ResponseDate) VALUES (@OTN,
@CustomerName, @ResponseDate)
If @@Error <> 0
Begin
Rollback Transaction
Return -1
End
RETURN SCOPE_IDENTITY( )
Commit Transaction
End
GO|||Ed,
Try moving the BEGIN TRAN down in the code to just prior to the DML (INSERT)
statement.
HTH
Jerry
"Ed" <Ed@.discussions.microsoft.com> wrote in message
news:BA6A49DF-BC78-412D-BC86-9D456BFDDBE1@.microsoft.com...
> Hello,
> I have a developer who developed the following stored procedure.
> My question is since it started with a Begin Transaction and if the
> record
> exists in the EXISTS statement and it will return an ID without commit /
> rollback, does he need to either commit or rollback the transaction?
>
>
> begin transaction
> if EXISTS(select OpportunityId from Opportunity where OTN = @.OTN and
> CustomerName = @.CustomerName and ResponseDate = @.ResponseDate and
> CreateDate
> = convert(char(11),getdate(),106))
> Begin
> Select @.OpportunityId = OpportunityId from Opportunity where OTN = @.OTN
> and CustomerName = @.CustomerName and ResponseDate = @.ResponseDate and
> CreateDate = convert(char(11),getdate(),106)
> Return @.OpportunityId
> End
> else
> Begin
> Insert into Opportunity (OTN, CustomerName, ResponseDate) VALUES (@.OTN,
> @.CustomerName, @.ResponseDate)
> If @.@.Error <> 0
> Begin
> Rollback Transaction
> Return -1
> End
> RETURN SCOPE_IDENTITY( )
> Commit Transaction
> End
>
> GO|||There is only one insert statement that I can see. Transaction is not
necessary at all in this case.
It is also not necessary to perform the same query two times in a row...
select the value once, then check to see if it is null.
SELECT @OpportunityId = OpportunityId FROM Opportunity WHERE OTN = @OTN
AND CustomerName = @CustomerName AND ResponseDate = @ResponseDate AND
CreateDate = convert(char(11),getdate(),106)
IF @OpportunityId IS NOT NULL Return @OpportunityId
ELSE
BEGIN
INSERT INTO Opportunity (OTN, CustomerName, ResponseDate) VALUES (@OTN,
@CustomerName, @ResponseDate)
IF @@Error <> 0 RETURN -1
ELSE RETURN SCOPE_IDENTITY()
END
GO
John Scragg
"Ed" wrote:
> Hello,
> I have a developer who developed the following stored procedure.
> My question is since it started with a Begin Transaction and if the reco
rd
> exists in the EXISTS statement and it will return an ID without commit /
> rollback, does he need to either commit or rollback the transaction?
>
>
> begin transaction
> if EXISTS(select OpportunityId from Opportunity where OTN = @.OTN and
> CustomerName = @.CustomerName and ResponseDate = @.ResponseDate and CreateDa
te
> = convert(char(11),getdate(),106))
> Begin
> Select @.OpportunityId = OpportunityId from Opportunity where OTN = @.OTN
> and CustomerName = @.CustomerName and ResponseDate = @.ResponseDate and
> CreateDate = convert(char(11),getdate(),106)
> Return @.OpportunityId
> End
> else
> Begin
> Insert into Opportunity (OTN, CustomerName, ResponseDate) VALUES (@.OTN,
> @.CustomerName, @.ResponseDate)
> If @.@.Error <> 0
> Begin
> Rollback Transaction
> Return -1
> End
> RETURN SCOPE_IDENTITY( )
> Commit Transaction
> End
>
> GO|||However, the is a risk that the row is deleted after the SELECT and before t
he INSERT. To eliminate
that risk, isolation SERIALIZABLE should be used, which requires a transacti
on (to do its work)...
:-)
Also, the code doesn't catch if the SELECT returns several rows. This is why
I prefer SET @.var =
(SELECT ...). A run-time error will be raised if SELECT returns > 1 row.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
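A sketch of the pattern these two points suggest, using the thread's table and parameter names (the table hints are one common way to apply the isolation per statement and are an assumption, not taken from the thread):

```sql
-- Sketch: hold a range lock so no one can delete or insert the row
-- between the SELECT and the INSERT; SET @var = (SELECT ...) raises
-- a run-time error if more than one row matches.
BEGIN TRANSACTION;
SET @OpportunityId =
    (SELECT OpportunityId
     FROM Opportunity WITH (SERIALIZABLE, UPDLOCK)
     WHERE OTN = @OTN AND CustomerName = @CustomerName
       AND ResponseDate = @ResponseDate);
IF @OpportunityId IS NULL
BEGIN
    INSERT INTO Opportunity (OTN, CustomerName, ResponseDate)
    VALUES (@OTN, @CustomerName, @ResponseDate);
    SET @OpportunityId = SCOPE_IDENTITY();
END
COMMIT TRANSACTION;
```

The UPDLOCK hint avoids the deadlock-prone case where two sessions both take shared range locks and then both try to insert.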
"John Scragg" <JohnScragg@.discussions.microsoft.com> wrote in message
news:969D558F-CCA2-4E38-A87F-2B52073A707D@.microsoft.com...
> There is only one insert statement that I can see. Transaction is not
> necessary at all in this case.
> It is also not necessary to perform the same query two times in a row...
> select the value once, then check to see if it is null.
> SELECT @.OpportunityId = OpportunityId FROM Opportunity WHERE OTN = @.OTN
> AND CustomerName = @.CustomerName AND ResponseDate = @.ResponseDate AND
> CreateDate = convert(char(11),getdate(),106)
> IF @.OpportunityID IS NOT NULL Return @.OpportunityId
> ELSE
> BEGIN
> INSERT INTO Opportunity (OTN, CustomerName, ResponseDate) VALUES (@.OTN,
> @.CustomerName, @.ResponseDate)
> IF @.@.Error <> 0 RETURN -1
> ELSE RETURN SCOPE_IDENTITY()
> END
> END
> GO
> John Scragg
> "Ed" wrote:
>|||Tibor, you make some good points. I am afraid I was not as clear and the
formatting of SQL in the newsgroup does not help :-) Let me try to be more
clear.
The original proc was apparently designed to perform 1 of 2 actions. Either
(1) return the ID of an existing row, or (2) in the absence of a row, insert
a new row and return the ID of the new row.
If the row is deleted after the select it should not matter (as far as the
proc is concerned) since there is nothing else happening. If the select
returns a value, the proc returns the column value selected and the INSERT
statement is ignored. The way it was written originally there were two
identical selects. First to check IF EXISTS and then the same query to get
the value. This did provide a tiny window for a row to be deleted between
the two selects, but my opinion was that it is simply inefficient to query
twice for the same result :-)
As for the possible error on multiple rows, you are absolutely right. It
can error. It is the same as the developer's original code, and so I am
assuming that they have confirmed the values in the where clause will provide
a unique row (through constraints, rules, etc.). If not, then he has this
problem in his original code as well.
John
"Tibor Karaszi" wrote:
> However, the is a risk that the row is deleted after the SELECT and before
the INSERT. To eliminate
> that risk, isolation SERIALIZABLE should be used, which requires a transac
tion (to do its work)...
> :-)
> Also, the code doesn't catch if the SELECT returns several rows. This is w
hy I prefer SET @.var =
> (SELECT ...). A run-time error will be raised if SELECT returns > 1 row.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "John Scragg" <JohnScragg@.discussions.microsoft.com> wrote in message
> news:969D558F-CCA2-4E38-A87F-2B52073A707D@.microsoft.com...
>|||That is a good point that I never think about...
How am I able to control the transaction so that it locks even the SELECT
statement, to make sure that when the record/ID is returned, no one else
has already deleted the record?
Thanks
Ed
"Tibor Karaszi" wrote:
> However, the is a risk that the row is deleted after the SELECT and before
the INSERT. To eliminate
> that risk, isolation SERIALIZABLE should be used, which requires a transac
tion (to do its work)...
> :-)
> Also, the code doesn't catch if the SELECT returns several rows. This is w
hy I prefer SET @.var =
> (SELECT ...). A run-time error will be raised if SELECT returns > 1 row.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
> Blog: http://solidqualitylearning.com/blogs/tibor/
>
> "John Scragg" <JohnScragg@.discussions.microsoft.com> wrote in message
> news:969D558F-CCA2-4E38-A87F-2B52073A707D@.microsoft.com...
>|||Sorry, John. I misinterpreted the logic. It looked like me as if the row exi
sts, get the values
*and* do an insert. I didn't read the original code carefully enough. I catc
hed that you picked up
on the "read twice" issue, but thought that a hole was left as I assumed ano
ther logic. My bad...
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"John Scragg" <JohnScragg@.discussions.microsoft.com> wrote in message
news:7C58CD9A-0D47-4D4D-A68B-15F5D64D65B2@.microsoft.com...
> Tibor, you make some good points. I am afraid I was not as clear and the
> formatting of SQL in the newsgroup does not help :-) Let me try to be mor
e
> clear.
> The original proc was apparently designed to perform 1 of 2 action. Eithe
r
> (1) return the ID of an existing row, or (2) in the absance of a row, inse
rt
> a new row and return the ID of the new row.
> If the row is deleted after the select it should not matter (as far as the
> proc is concerned) since there is nothing else happening. If the select
> returns a value, the proc returns the coulmn value selected and the INSERT
> statement is ignored. The way it was written originally there were two
> identical selects. First to check IF EXISTS and then the same query to ge
t
> the value. This did provide a tiny window for a row to be deleted between
> the two selects, but my opinion was that it is simply ineffcient to query
> twice for the same result :-)
> As for the possible error on multiple rows, you are absolutely right. It
> can error. It is the same as the developer's original code, and so I am
> assuming that they have confirmed the values in the where clause will prov
ide
> a unique row (through constraints, rules, etc.). If not, then he has this
> problem in his original code as well.
> John
> "Tibor Karaszi" wrote: