Friday, March 30, 2012
Receiving this error since migrating to SQL 2000 from SQL 7
Three simple suggestions:
1- Run the DTS package manually and see where (which step) you get the error.
2- Create an output file through Package-->Properties-->Logging in the DTS package and read the output file for the error.
3- If you upgraded from SQL Server 7.0 to 2000, run the sp_updatestats stored proc for all the databases in Query Analyzer (QA).
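A minimal way to script suggestion 3 across every database - a hedged sketch, assuming the undocumented sp_MSforeachdb helper that ships with SQL Server 2000 is acceptable in your environment:
-- '?' is replaced with each database name in turn.
EXEC sp_MSforeachdb 'USE [?]; EXEC sp_updatestats;'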
>--Original Message--
>Ever since migrating to SQL 2000 from SQL 7 I receive this error at least
>once a day while running jobs through SQL Agent. It may not be the same job
>having the problem. Any thoughts?
>Executed as user: domain\user. DTSRun: Loading... Error: -2147467259
>(80004005); Provider Error: 0 (0) Error string: Timeout expired
>Error source: Microsoft OLE DB Provider for SQL Server
>Help file: Help context: 0. Process Exit Code 1. The step failed.
>.
>
The job is only failing once or twice a day even though it runs every hour or,
in the case of another job, every half hour. It is never the same time of
day. I can run the DTS job manually and it works without failing. I will
give the logging a try and the sp_updatestats. I didn't upgrade the server -
it was a clean install and a restore from SQL 7. Thanks for your help.
"Mark" wrote:
> Three simple suggestions:
> <snip>
Wednesday, March 28, 2012
Receive all messages on queue
Hi, I am trying to create a batch process and then commit for all messages on the queue. The problem I am having is that when I run my query (as below) I only receive the first message and the corresponding end dialog for the message, although I have 2000 records sitting in the queue. It is my understanding that RECEIVE without any criteria (i.e. no TOP(1) or WHERE clause) should select everything off the queue. I tried RECEIVE TOP(100) expecting 100 records but still only got 2 back.
Any help appreciated.
WAITFOR(RECEIVE
queuing_order,
conversation_handle,
message_type_name,
message_body
FROM [RMIS_COMMS_Queue]
INTO @tableMessages), TIMEOUT 2000;
Each RECEIVE returns only messages belonging to one conversation group. If each conversation is its own group (which is true unless you use the RELATED_CONVERSATION clause in BEGIN DIALOG or you use MOVE CONVERSATION), then you can RECEIVE only one conversation per call. The TOP clause applies to this result set (one conversation).
HTH,
~ Remus
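A hedged sketch of one way to work with this behavior and still drain the whole queue: loop, taking one conversation group per RECEIVE, until a call comes back empty. The table variable layout is an assumption matching the columns above.
DECLARE @tableMessages TABLE (
    queuing_order bigint,
    conversation_handle uniqueidentifier,
    message_type_name sysname,
    message_body varbinary(max));
WHILE (1 = 1)
BEGIN
    -- Each call returns at most one conversation group's messages.
    WAITFOR(RECEIVE queuing_order, conversation_handle,
                    message_type_name, message_body
            FROM [RMIS_COMMS_Queue]
            INTO @tableMessages), TIMEOUT 2000;
    IF @@ROWCOUNT = 0 BREAK;  -- nothing left (or the timeout expired)
    -- ... process this batch, END CONVERSATION where appropriate ...
    DELETE FROM @tableMessages;
END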
I'm using triggers to cause an update of a separate database via Service Broker, and in our activated stored procedure we are seeing similar situations. Could you give an example of a script that a trigger might use to take advantage of the RELATED_CONVERSATION clause? I looked at the documentation and didn't see how I could use this in our situation. (A trigger on update sends the inserted table, with a FOR XML clause, as a message to the queue.)
Thanks,
Bill
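No example follows in the thread, so here is a rough, hypothetical sketch of the pattern Bill describes; every object name here (table, services, contract, message type) is invented, and the comment shows where RELATED_CONVERSATION would go:
CREATE TRIGGER trg_SourceTable_Update ON dbo.SourceTable
AFTER UPDATE AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @xml xml, @h uniqueidentifier;
    -- Package the updated rows as XML, as in Bill's description.
    SET @xml = (SELECT * FROM inserted AS row FOR XML AUTO, TYPE);
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Example/InitiatorService]
        TO SERVICE '//Example/TargetService'
        ON CONTRACT [//Example/UpdateContract]
        WITH ENCRYPTION = OFF;
        -- To put this dialog in the same conversation group as an
        -- existing one, add: RELATED_CONVERSATION = @existingHandle
        -- to the WITH clause above.
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Example/UpdateMessage] (@xml);
END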
Monday, March 26, 2012
Rebuildm failed with Error 266
I am trying to run rebuildm to change the collation of SQL. I stop SQL and
even have deleted sqlmangr.exe out of Task Manager "Processes". I have
everything closed on the machine. SQL is definitely down - No services show
in "Services". However, when I run rebuildm - after I enter where the data
files are and the new collation and press Rebuild it returns an error. First
one: "Cannot copy master: It is being used by another person or program.
Close any programs that might be using the file and try again." When I press
OK I get "Rebuild Master failed with error 266: The copy function cannot be
used. The error occurred due to a file copy failure."
How can there be a connection? - SQL is stopped! No one can connect - the
server isn't even connected to the network. I've got everything closed!
Can you please help me - This is driving me crazy -- Thanx!
Terry
Regina Qu'appelle Health Region
Saskatchewan
CANADA
Try copying the files to disk first and remove the read only
attribute. That's generally what causes the error you are
seeing.
-Sue
On Fri, 30 Sep 2005 15:15:03 -0700, "Terry"
<terry@.microsoft.newsgroups.com> wrote:
> <snip>
|||Thanks! - Tried it but...
My db files and rebuildm are on disk.
The actual directory tree will not let me remove the Read-Only on the Data dir.
Windows Server 2003 doesn't want me to remove it - I apply it but it just
reverts back to Read-Only -- tried all the way up the dir structure.
I went ahead anyway and created another branch for the db files and copied
them to it.
I ran rebuildm - it went further but came back with a different but similar
error
"Rebuild Master failed with error -1: the error occurred due to a file copy
failure"
I didn't actually attach the new set of dbs to SQL though ?
Thanx!
Terry
Regina Health Region
Saskatchewan
CANADA
"Sue Hoegemeier" wrote:
> <snip>
|||I'm not clear on what you moved where but I doubt it will
work if you split things apart.
Copy the entire x86\Data directory. Select all of the files
in the data directory and remove the read-only attribute.
Then run rebuildm. Use this other data directory as the
source directory for the data files.
-Sue
On Mon, 3 Oct 2005 10:36:04 -0700, "Terry"
<terry@.microsoft.newsgroups.com> wrote:
> <snip>
|||We always install SQL onto another drive - usually D:
So a typical install puts files/dirs on both C: and D:
.../Data exists on D: that's what I copied to another directory
and used as a source for Rebuildm
data files are always in d:\Program Files\Microsoft SQL Server\MSSQL\Data
I took those files and copied them to D:\MSSQL\Data which is a new dir
I must be confused about something basic !?!?!
Is my problem due to installing SQL on D: ?
SQL 2000 - SP4 ? Windows Server 2003 ?
Rebuildm is on C:\...
Thanks!
Terry
Regina Health Region
Saskatchewan
CANADA
"Sue Hoegemeier" wrote:
> <snip>
|||Yes...using rebuildm has nothing at all to do with whatever
data files you currently have. Ignore whatever you have and
where ever they are located for the rebuildm utility. Just
look at this as having nothing to do with whatever you
already installed.
Copy the entire x86\Data directory from your SQL Server
installation CD to a share. You need all of the original
files from the installation CD.
Select all of the files in the data directory that you
copied from the CD. Right click, select properties and
remove the read-only attribute.
Then run rebuildm. Use this data directory share where you
have just copied the files from the CD as the source
directory for the data files.
It sounds like you are trying to use some of your database
files and that's not how it works. The rebuildm utility
rebuilds the master database and system databases. It uses the
original files for these databases from the CD. Rebuildm gets
you back up and running but you still need to restore your
master, model and msdb databases.
-Sue
On Tue, 4 Oct 2005 08:16:01 -0700, "Terry"
<terry@.microsoft.newsgroups.com> wrote:
> <snip>
|||Thanks Sue!!! -- that's it !!! -- I wasn't understanding the intent of the
utility. I thought it wanted the location of the existing system files.
Terry
Regina Health Region
Saskatchewan
CANADA
"Sue Hoegemeier" wrote:
> <snip>
|||Glad to hear you got it. You're not the only one I've seen
have problems with the same issue, FWIW...
-Sue
On Wed, 5 Oct 2005 08:16:01 -0700, "Terry"
<terry@.microsoft.newsgroups.com> wrote:
>Thanks Sue!!! -- that's it !!! -- I wasn't understanding the intent of the
>utility. I thought it wanted the location of the existing system files.
Rebuildm Error
I use Win AD Server 2000 and SQL Server 2000.
My Master DB crashed and I ran Rebuildm to rebuild.
After setting the CD-ROM path and running rebuildm
it shows error -1,
and I checked the OS event log:
17207 :
udopen: Operating system error 5 (access denied) during the creation/opening of physical device d:\sqldata\mssql\data\master.mdf
FCB::Open failed: Could not open device d:\sqldata\mssql\data\master.mdf
for virtual device number (VDN) 1.
What's the problem?
thx
Originally posted by mmmaya:
<snip>
rebuildm needs to be able to write to the path that you specified as the source directory. It fails or hangs because it cannot write to your CD-ROM drive. So you'll need to copy the contents of the data directory to a writable device (e.g. C:). See http://support.microsoft.com/default.aspx?scid=kb;en-us;273572
|||Originally posted by cbatwillis:
<snip>
Yes!! It's a problem with Read-Only!
thank you ^_^
Friday, March 23, 2012
Rebuilding an index and update statistics
My understanding is that once an index is rebuilt, there is no need to
run update statistics afterwards because that is automatically done. Is
this the case? For example, does this make any sense:
DBCC DBREINDEX('TableName')
EXEC ('UPDATE STATISTICS TableName')
Basically we have a vendor insisting that our performance problems with
their product are due to not updating statistics after a data load.
However, we are rebuilding the indexes after the load. Am I correct
here? Any links to MS documentation we could show to the vendor would
be a great help.
Thanks in advance.
If the indexes are being rebuilt the stats will be updated automatically
unless you have turned off AUTO UPDATE STATS or set a property of the index
to disallow the updates. As a matter of fact, rebuilding with DBREINDEX will
do a FULL scan, which gives the most accurate information of the breakdown.
Unless you specify a sample rate, Update Stats will do a limited sample by
default. DBCC INDEXDEFRAG does not update the stats on its own, but
DBREINDEX will. You can always run DBCC SHOW_STATISTICS after the DBREINDEX
to see for sure if they are getting updated. You can send the vendor the
results and tell them to take a hike <g>.
Andrew J. Kelly SQL MVP
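A quick way to verify this yourself - a sketch with placeholder object names (swap in your own table and index):
-- Rebuild all indexes on the table (FULL scan of the data).
DBCC DBREINDEX('dbo.YourTable')
-- Then inspect the statistics header; the "Updated" column should
-- now show the rebuild date.
DBCC SHOW_STATISTICS('dbo.YourTable', 'YourIndexName')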
"sqlboy2000" <sqlboy2000@.hotmail.com> wrote in message
news:1104690140.021556.95790@.z14g2000cwz.googlegroups.com...
> <snip>
|||Hi SQLBoy
To my understanding DBCC DBREINDEX is equivalent to a DROP/CREATE index
statement. SQL does an update of the statistics after a CREATE INDEX
statement - unless you specify the STATISTICS_NORECOMPUTE option.
Does new data enter the table after you have loaded and rebuilt the
indexes? In that case the statistics may slowly become out of date.
Yours sincerely
Thomas Kejser
M.Sc, MCDBA
"sqlboy2000" <sqlboy2000@.hotmail.com> wrote in message
news:1104690140.021556.95790@.z14g2000cwz.googlegroups.com...
> <snip>
|||Andrew, but even if AUTO UPDATE STATS is OFF on DB the statistics will be
updated during index rebuilds anyway.
And what's the name of the index property you are referring to?
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:%234S79xP8EHA.2600@.TK2MSFTNGP09.phx.gbl...
> <snip>
|||As mentioned in the other posts: the statistics of the indexed columns
will be recalculated when reindexing.
However, only the index statistics will be updated. Any manually or
automatically created column statistics will not be updated. The example
below proves this behavior.
Hope this helps,
Gert-Jan
use northwind
go
select * into Test from orders
alter table Test add constraint PK_Test primary key clustered (OrderID)
select * into Test2 from "order details"
alter table Test2 add constraint PK_Test2 primary key clustered
(OrderID,ProductID)
go
-- used to display the autocreate stats on Test2
create procedure test_showstats as
begin
declare @sql varchar(4000)
select @sql='dbcc show_statistics (Test2,'+name+')'
from sysindexes
where id=object_id('Test2')
and name <> 'PK_Test2'
exec (@sql)
end
go
-- this will auto create stats on Test2.UnitPrice if "autocreate stats"
is turned on
SELECT O.OrderID,CustomerID,Freight
FROM Test O
INNER JOIN Test2 OD
ON OD.OrderID=O.OrderID
WHERE CustomerID >= 'S'
AND UnitPrice >= 40.00
go
-- shows current stats: no rows with UnitPrice=270.00
exec test_showstats
go
insert into Test2 values (10248,15, 270.00 ,1,0.0)
insert into Test2 values (10248,16, 270.00 ,1,0.0)
insert into Test2 values (10248,17, 270.00 ,1,0.0)
insert into Test2 values (10248,18, 270.00 ,1,0.0)
insert into Test2 values (10248,19, 270.00 ,1,0.0)
insert into Test2 values (10248,20, 270.00 ,1,0.0)
insert into Test2 values (10248,21, 270.00 ,1,0.0)
insert into Test2 values (10248,22, 270.00 ,1,0.0)
insert into Test2 values (10248,23, 270.00 ,1,0.0)
insert into Test2 values (10248,24, 270.00 ,1,0.0)
go
dbcc dbreindex(Test2,PK_Test2)
go
-- will show the updated value of 13 rows for OrderID=10248
dbcc show_statistics(test2,pk_test2)
go
-- shows that the stats on column UnitPrice have not been updated
exec test_showstats
go
update statistics Test2
go
-- now the stats show 10 rows with UnitPrice=270.00
exec test_showstats
go
-- cleanup
drop table Test
drop table Test2
drop procedure test_showstats
sqlboy2000 wrote:
> <snip>
|||but if you do dbcc dbreindex(Test2, '') instead the column statistics will
be updated as well
"Gert-Jan Strik" <sorry@.toomuchspamalready.nl> wrote in message
news:41D854D4.9B1CCF3E@.toomuchspamalready.nl...
> <snip>
|||> Andrew, but even if AUTO UPDATE STATS is OFF on DB the statistics will be
> updated during indexes' rebuild anyways.
Sorry I was thinking ahead of myself.
> And what's the name of index property are you referring to?
There are several ways to disable auto-update stats for an index or table,
such as UPDATE STATISTICS with NORECOMPUTE or CREATE INDEX with
STATISTICS_NORECOMPUTE, but the most common is probably sp_autostats.
For more details check BOL under the topic "statistical information,
indexes".
Andrew J. Kelly SQL MVP
|||Yes indeed! Thanks for the heads up.
Gert-Jan
Alex wrote:
> but if you do dbcc dbreindex(Test2, '') instead the column statistics will
> be updated as well
>
<snip>
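For reference, a hedged sketch of the sp_autostats call Andrew mentioned, run against the demo table from Gert-Jan's script:
-- Show the current auto-update setting for each index and stat.
EXEC sp_autostats 'Test2'
-- Turn automatic statistics updates off (or back ON) for one index.
EXEC sp_autostats 'Test2', 'OFF', 'PK_Test2'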
Wednesday, March 21, 2012
Rebuild Master fails
I am trying to test disaster recovery on a test server. I am trying to
create the master as though it has been corrupted. When I run the
Rebuildm.exe, it creates the master.mdf and mastlog.ldf files in the
C:\Program Files\MS SQL SERVER\DATA\MSSQL\Data directory from the
installation CD but it fails with an error -" Rebuild Master failed with
error -1:" and in the windows application event error log it says 17204
:FCB::Open failed: Could not open device C:\Program Files\Microsoft SQL
Server\DATA\MSSQL\data\master.mdf for virtual device number (VDN) 1.
I am not able to start the MSSQLSERVR service. I have tried three times
rebuilding the master but it has failed with the same error. Any help will be
greatly appreciated. Thanks.
Hi,
I looked at the file attributes and unchecked the READ ONLY for the data and
log files for master, model and msdb databases. Now when I start the
MSSQLSERVR service it runs for just 1 - 2 seconds and then it stops. What
might be causing this? Thanks.
"sharman" wrote:
> <snip>
|||> Now when I start the
> MSSQLSERVR service it runs for just 1 - 2 seconds and then it stops. What
> might be causing this?
Just about anything. Check the errorlog file for error messages.
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"sharman" <sharman@.discussions.microsoft.com> wrote in message
news:1F7F6AB4-A231-4AA3-B4B4-E4807F0E4B7F@.microsoft.com...
> Hi,
> I looked at the file attributes and unchecked the READ ONLY for the data and
> log files for master, model and msdb databases. Now when I start the
> MSSQLSERVR service it runs for just 1 - 2 seconds and then it stops. What
> might be causing this? Thanks.
> "sharman" wrote:
|||Hi,
I was able to figure out the reason. The source file location for the mdf
and ldf files was on a CD with the Read-Only attribute checked. I copied all
the files to a local disk, unchecked the Read-Only attribute, and reran
rebuildm.exe, and it worked this time. This is in Microsoft KB 273572.
"Tibor Karaszi" wrote:
> <snip>
Tuesday, March 20, 2012
Rebuild Index grew database .mdf files
Overnight, I had a Rebuild Index job run and it grew the database's .mdf file approximately 4 times: 300MB to 1.2GB. It also changed the initial size of the database to 1.2GB. Is there any way to recover this space or to shrink the database?
I tried to shrink it with DBCC SHRINKDATABASE to no avail (only recovered 40MB).
Do I have any options to reclaim the space?
Any ideas?
Thanks.
It turns out that the cause of the increase in size of the database files was that I had the Rebuild Index job set with a fill factor of 10%. I tested a bit and found that if I set it to reorganize pages with the default amount of free space, it didn't triple/quadruple the size of the files.
Case Closed!
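That behavior follows from what fill factor means: 10 tells SQL Server to leave each index page only 10% full, so the rebuilt indexes carry roughly 90% free space and the files balloon. A hedged sketch with a placeholder table name:
-- Fill factor 10: leaves ~90% of each leaf page empty - huge growth.
DBCC DBREINDEX('dbo.YourTable', '', 10)
-- Fill factor 0 (the default): packs leaf pages full.
DBCC DBREINDEX('dbo.YourTable', '', 0)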
Friday, March 9, 2012
Reasons why DBCC INDEXDEFRAG would not work
I am performing DBCC INDEXDEFRAG on my indexes in a table because I have
really bad extent scan fragmentation. After I run it though nothing has
seemingly changed. Can someone explain some reasons why this would be
happening?
What index id are you looking at? Don't look at index id 0, as this is the data in a heap
table (table without clustered index). In a heap, there is no order between the rows, so the
value is meaningless.
If you are looking at an index (index id between 1 and 250), what does logical scan
fragmentation say?
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"Andre" <Andre@.discussions.microsoft.com> wrote in message
news:3D1ADA79-7827-48CD-BC51-0AEB4D54F164@.microsoft.com...
>I am performing DBCC INDEXDEFRAG on my indexes in a table because I have
> really bad extent scan fragmentation. After I run it though nothing has
> seemingly changed. Can someone explain some reasons why this would be
> happening?
|||In addition to what Tibor stated, if you have multiple files in your
filegroup you should pay attention to the Logical Fragmentation.
Andrew J. Kelly SQL MVP
"Andre" <Andre@.discussions.microsoft.com> wrote in message
news:3D1ADA79-7827-48CD-BC51-0AEB4D54F164@.microsoft.com...
>I am performing DBCC INDEXDEFRAG on my indexes in a table because I have
> really bad extent scan fragmentation. After I run it though nothing has
> seemingly changed. Can someone explain some reasons why this would be
> happening?
|||Run INDEXDEFRAG again in QA, output to text and paste it here.
"Andre" <Andre@.discussions.microsoft.com> wrote in message
news:3D1ADA79-7827-48CD-BC51-0AEB4D54F164@.microsoft.com...
>I am performing DBCC INDEXDEFRAG on my indexes in a table because I have
> really bad extent scan fragmentation. After I run it though nothing has
> seemingly changed. Can someone explain some reasons why this would be
> happening?
|||I think you mean DBCC SHOWCONTIG?
Andrew J. Kelly SQL MVP
"JT" <someone@.microsoft.com> wrote in message
news:OPK0LivjFHA.3936@.TK2MSFTNGP10.phx.gbl...
> Run INDEXDEFRAG again in QA, output to text and paste it here.
> "Andre" <Andre@.discussions.microsoft.com> wrote in message
> news:3D1ADA79-7827-48CD-BC51-0AEB4D54F164@.microsoft.com...
>
|||Here is the output of DBCC SHOWCONTIG (Faxes) with all_indexes:
DBCC SHOWCONTIG scanning 'Faxes' table...
Table: 'Faxes' (1893581784); index ID: 1, database ID: 5
TABLE level scan performed.
- Pages Scanned........................: 31251
- Extents Scanned.......................: 3921
- Extent Switches.......................: 3929
- Avg. Pages per Extent..................: 8.0
- Scan Density [Best Count:Actual Count]......: 99.41% [3907:3930]
- Logical Scan Fragmentation ..............: 0.04%
- Extent Scan Fragmentation ...............: 0.51%
- Avg. Bytes Free per Page................: 748.9
- Avg. Page Density (full)................: 90.75%
DBCC SHOWCONTIG scanning 'Faxes' table...
Table: 'Faxes' (1893581784); index ID: 2, database ID: 5
LEAF level scan performed.
- Pages Scanned........................: 5482
- Extents Scanned.......................: 696
- Extent Switches.......................: 704
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 97.30% [686:705]
- Logical Scan Fragmentation ..............: 0.29%
- Extent Scan Fragmentation ...............: 1.44%
- Avg. Bytes Free per Page................: 822.8
- Avg. Page Density (full)................: 89.83%
DBCC SHOWCONTIG scanning 'Faxes' table...
Table: 'Faxes' (1893581784); index ID: 3, database ID: 5
LEAF level scan performed.
- Pages Scanned........................: 10091
- Extents Scanned.......................: 1276
- Extent Switches.......................: 1353
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 93.21% [1262:1354]
- Logical Scan Fragmentation ..............: 0.50%
- Extent Scan Fragmentation ...............: 1.18%
- Avg. Bytes Free per Page................: 800.6
- Avg. Page Density (full)................: 90.11%
DBCC SHOWCONTIG scanning 'Faxes' table...
Table: 'Faxes' (1893581784); index ID: 4, database ID: 5
LEAF level scan performed.
- Pages Scanned........................: 6900
- Extents Scanned.......................: 873
- Extent Switches.......................: 974
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 88.51% [863:975]
- Logical Scan Fragmentation ..............: 0.96%
- Extent Scan Fragmentation ...............: 2.75%
- Avg. Bytes Free per Page................: 867.0
- Avg. Page Density (full)................: 89.29%
DBCC SHOWCONTIG scanning 'Faxes' table...
Table: 'Faxes' (1893581784); index ID: 5, database ID: 5
LEAF level scan performed.
- Pages Scanned........................: 7654
- Extents Scanned.......................: 966
- Extent Switches.......................: 1071
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 89.27% [957:1072]
- Logical Scan Fragmentation ..............: 0.78%
- Extent Scan Fragmentation ...............: 2.17%
- Avg. Bytes Free per Page................: 855.0
- Avg. Page Density (full)................: 89.44%
DBCC SHOWCONTIG scanning 'Faxes' table...
Table: 'Faxes' (1893581784); index ID: 6, database ID: 5
LEAF level scan performed.
- Pages Scanned........................: 9499
- Extents Scanned.......................: 1197
- Extent Switches.......................: 1325
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 89.59% [1188:1326]
- Logical Scan Fragmentation ..............: 0.88%
- Extent Scan Fragmentation ...............: 38.43%
- Avg. Bytes Free per Page................: 851.5
- Avg. Page Density (full)................: 89.48%
DBCC SHOWCONTIG scanning 'Faxes' table...
Table: 'Faxes' (1893581784); index ID: 7, database ID: 5
LEAF level scan performed.
- Pages Scanned........................: 7911
- Extents Scanned.......................: 999
- Extent Switches.......................: 1088
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 90.82% [989:1089]
- Logical Scan Fragmentation ..............: 0.70%
- Extent Scan Fragmentation ...............: 60.26%
- Avg. Bytes Free per Page................: 856.8
- Avg. Page Density (full)................: 89.41%
DBCC SHOWCONTIG scanning 'Faxes' table...
Table: 'Faxes' (1893581784); index ID: 8, database ID: 5
LEAF level scan performed.
- Pages Scanned........................: 8900
- Extents Scanned.......................: 1124
- Extent Switches.......................: 1249
- Avg. Pages per Extent..................: 7.9
- Scan Density [Best Count:Actual Count]......: 89.04% [1113:1250]
- Logical Scan Fragmentation ..............: 0.79%
- Extent Scan Fragmentation ...............: 58.19%
- Avg. Bytes Free per Page................: 836.1
- Avg. Page Density (full)................: 89.67%
DBCC execution completed. If DBCC printed error messages, contact your
system administrator.
"Tibor Karaszi" wrote:
> <snip>
|||That too.
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OBNVykvjFHA.3064@.TK2MSFTNGP15.phx.gbl...
>I think you mean DBCC SHOWCONTIG?
> --
> Andrew J. Kelly SQL MVP
>
> "JT" <someone@.microsoft.com> wrote in message
> news:OPK0LivjFHA.3936@.TK2MSFTNGP10.phx.gbl...
>
|||Your Logical fragmentation is fine and is what you should be concerned with.
I suspect you have more than 1 file in that filegroup and that is what is
causing the higher numbers of Physical Fragmentation. Is that true? What
does sp_helpdb 'yourDB' show? By the way do you really need 8 indexes on a
Fax table?
Andrew J. Kelly SQL MVP
"Andre" <Andre@.discussions.microsoft.com> wrote in message
news:A8CFC75C-77C5-4A61-857F-DA43BA4FC451@.microsoft.com...
> <snip>
>|||No. I have a primary file group with one file and an index file group with
one file.
"Andrew J. Kelly" wrote:
> <snip>
>|||I just realized you are using INDEXDEFRAG. Have you tried using DBCC
DBREINDEX instead? In either case I wouldn't worry too much about it and
concentrate on the logical fragmentation instead.
Andrew J. Kelly SQL MVP
"Andre" <Andre@.discussions.microsoft.com> wrote in message
news:578E75D1-9A3B-434C-8166-4871D498FB91@.microsoft.com...
> No. I have a primary file group with one file and an index file group
> with
> one file.
> "Andrew J. Kelly" wrote:
>
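For anyone trying Andrew's suggestion on the table from this thread, a hedged sketch (the empty string means all indexes; 90 is an assumed fill factor):
DBCC DBREINDEX('Faxes', '', 90)
-- Re-check fragmentation afterwards.
DBCC SHOWCONTIG('Faxes') WITH ALL_INDEXES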
reasons for allocation/consistency errors
What are the reasons why we may see allocation or consistency errors
when we run DBCC CHECKDB? Using SQL 2000.
Basically looking for all possible causes.
That is kind of a big question, and is probably why no one answers... None
of us has a list of all of the reasons...
In my experience, if you run your DBCC without going to read-only or single
user mode, and others are using the system for inserts, updates, and deletes
during the DBCC, you will probably get some errors. These are probably due to
changes made by the other users... If you get such an error, rerun DBCC on
the table in question... If the error disappears, or moves to another page,
consider it a transient error and ignore it... If the same error occurs on
the same page, it is a hard error; work to fix it...
Otherwise the most common reason for DBCC errors is hardware failure.
This can include memory chips, disk controllers, and disk drives...
Often recurring DBCC errors are the first warning you will get regarding
an impending hardware failure...
Software bugs in the OS and SQL can also cause corruption, but this occurs
less frequently.
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:unQs66aIEHA.2688@.tk2msftngp13.phx.gbl...
> What are the reasons for why me may see allocation or consistency errors
> when we run DBCC CHECKDB ? Using SQL 2000.
> Basically looking for all possible causes
>
>
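A hedged sketch of the rerun advice above, against a single suspect table (the name is a placeholder):
-- A transient error should move or disappear on a second pass;
-- a repeat on the same page points at a hard (hardware) problem.
DBCC CHECKTABLE('dbo.SuspectTable') WITH NO_INFOMSGS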
Wednesday, March 7, 2012
really strange performance problem
I have a SQL 2000 server with SP3 and I migrated a database from SQL 7. I ran two basically identical selects: the first one took 2 seconds, the second one 58 minutes...
select count(*) from uu_resume_ses_dummy_dummy
where substring(dominio,1,20)
not in (select substring(col018_dominio,1,20)
from iis_uu_diario_resume where substring(col018_dominio,1,20)
= substring(uu_resume_ses_dummy_dummy.dominio,1,20))
option (maxdop 1)
select count(*) from uu_resume_ses_dummy_dummy
where substring(dominio,1,30)
not in (select substring(col018_dominio,1,30)
from iis_uu_diario_resume where substring(col018_dominio,1,30)
= substring(uu_resume_ses_dummy_dummy.dominio,1,30))
option (maxdop 1)
The only difference is the substring length: 20 versus 30. Notice that the limit is not fixed; sometimes the jump in execution time happens when I change from 90 to 91.
I really don't know. (The fields are varchar(90), but it was the same with varchar(255).) The plans are exactly the same, and in the second case the CPU sat fixed at 50% for 58 minutes.
Thanks for all the help (really needed).|||Essentially, for every record in the "uu_resume_ses_dummy_dummy" table you are looking at every record in the "iis_uu_diario_resume" table, using only one processor.
Since you will be looking at every record, you have the potential of being delayed by locks, index leaf splits and other traffic. What happens if you run these selects on a quiet system? I suspect the time difference is small.|||I was running these queries both on a "busy" server (4 CPU, 4 GB RAM) and on a really quiet server (2 CPU, 4 GB RAM) with the same timing. Quiet server means that the % of CPU without that select was basically between 0 and 5%.|||I forgot some things:
1) same times without the option of processing on one CPU only
2) both tables are indexed on the specific fields.
What you say is OK. The problem is: why do almost identical queries have such a big, big, big difference in execution time?
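For what it's worth, SUBSTRING on both sides of the comparison is not sargable, so each outer row can force work against the whole inner table. A NOT EXISTS rewrite on the raw columns (a sketch only; it assumes comparing the full columns, rather than a 20- or 30-character prefix, is acceptable for this data) gives the optimizer an indexable predicate:
select count(*)
from uu_resume_ses_dummy_dummy d
where not exists (
select 1
from iis_uu_diario_resume r
where r.col018_dominio = d.dominio)
option (maxdop 1)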
Really really slow cursor
I'm stumped on this.
My developer's proc, left to run all night, consumes tons of CPU but does no
updates at all, and I have to kill the connection in the morning.
The select in the cursor declaration returns 357 rows, so not an outrageous
result set. When I run the select in Query Analyzer it runs in under 6 seconds.
When I run the stored proc in QA it takes forever. Debug print statements
appear up until the initial fetch and then nothing.
Can a cursor loop indefinitely?
CREATE PROCEDURE dbo.stc_Insert_Instrument AS
set nocount on
declare
@rc int -- return code
, @errmsg varchar(250)
, @msg varchar(50)
select @errmsg = 'Error in proc ' + object_name(@@procid) + ': '
-- create temp tables
select
<Snip>
into #moodystaging
from moodystaging
select
<Snip>
into #spstaging
from spstaging
declare ins_cursor cursor read_only for
select
iss.IssuerId,
st.SECURITY_DES,
case isdate(st.maturity)
when 1 then st.maturity
else null
end as maturity,
case isdate(st.issue_dt)
when 1 then st.issue_dt
else null
end as issue_dt,
case isnumeric(st.cpn)
when 1 then st.cpn
else null
end as cpn,
cp.CouponTypeId,
st.CRNCY,
case isnumeric(st.AMT_ISSUED)
when 1 then st.AMT_ISSUED
else null
end as amt_issued,
case isnumeric(st.AMT_OUTSTANDING)
when 1 then st.AMT_OUTSTANDING
else null
end as amt_outstanding,
r.RatingId,
sprt.RatingId as spRatingId,
st.id_bb_unique,
st.id_isin,
st.id_cusip
from
staging st
left JOIN Issuer iss on st.ID_BB_COMPANY = iss.SourceId
INNER JOIN CouponType cp on st.CPN_TYP = cp.CouponTypeName
left join external e on st.id_bb_unique = e.externalid and
e.externaltype='BB'
left join #moodystaging mt on st.ID_BB_UNIQUE = mt.ID_BB_UNIQUE
left join rating r on mt.rtg = r.rating and r.type = 'Moody'
left join #spstaging spt on st.ID_BB_UNIQUE = spt.ID_BB_UNIQUE
left join rating sprt on spt.rtg = sprt.rating and sprt.type = 'Sp'
where
e.externalid is null
and iss.sourceid is not null
open ins_cursor
declare
@issuerid int,
@instrument_name varchar(50),
@maturitydate datetime,
@issuedate datetime,
@coupon float(8),
@coupontypeid int,
@currency varchar(3),
@amountissued float(8),
@amountoutstanding float(8),
@moodyratingid int,
@spratingid int,
@bb_id varchar(50),
@isin varchar(50),
@cusip varchar(50),
@instrumentid int
fetch next from ins_cursor into
@issuerid,
@instrument_name,
@maturitydate,
@issuedate,
@coupon,
@coupontypeid,
@currency,
@amountissued,
@amountoutstanding,
@moodyratingid,
@spratingid,
@bb_id,
@isin,
@cusip
print 'after first fetch'
while @@fetch_status = 0
begin
print 'processing'
insert into instrument (
issuerid,
instrumentname,
maturitydate,
issuedate,
coupon,
coupontypeid,
currency,
amountissued,
amountoutstanding,
moodyratingid,
spratingid)
values (
@issuerid,
@instrument_name,
@maturitydate,
@issuedate,
@coupon,
@coupontypeid,
@currency,
@amountissued,
@amountoutstanding,
@moodyratingid,
@spratingid)
-- check for errors
select @rc = @@error
if @rc <> 0
begin
select @msg = 'Insert into Instrument failed.'
goto errhandler
end
set @instrumentid = @@identity
insert into external (instrumentid, externaltype, externalid) values
(@instrumentid, 'BB', @bb_id)
-- check for errors
select @rc = @@error
if @rc <> 0
begin
select @msg = 'Insert into External failed for BB type.'
goto errhandler
end
insert into external (instrumentid, externaltype, externalid) values
(@instrumentid, 'ISIN', @isin)
-- check for errors
select @rc = @@error
if @rc <> 0
begin
select @msg = 'Insert into External failed for ISIN type.'
goto errhandler
end
insert into external (instrumentid, externaltype, externalid) values
(@instrumentid, 'Cusip', @cusip)
-- check for errors
select @rc = @@error
if @rc <> 0
begin
select @msg = 'Insert into External failed for Cusip type.'
goto errhandler
end
fetch next from ins_cursor into
@issuerid,
@instrument_name,
@maturitydate,
@issuedate,
@coupon,
@coupontypeid,
@currency,
@amountissued,
@amountoutstanding,
@moodyratingid,
@spratingid,
@bb_id,
@isin,
@cusip
end
close ins_cursor
deallocate ins_cursor
drop table #moodystaging
drop table #spstaging
return 0 -- success
errhandler:
raiserror ('%s %s',16,1,@errmsg,@msg)
if @@trancount > 0
rollback transaction
drop table #moodystaging
drop table #spstaging
return @rc
GO
Maybe one of the tables that you are updating is locked and the stored
procedure is waiting for the lock to free up?
"Si" <Si@.discussions.microsoft.com> wrote in message
news:4A962893-F4D1-4BDA-814E-9DB0E891EA6F@.microsoft.com...
> <Snip>
|||One thing is clear - this can be done without a cursor. Or have I missed
something?
What else is going on in there during the night?
ML
http://milambda.blogspot.com/|||The reason for the cursor was to extract the identity value from the
instrument table to enter it into the external table. (Sorry, I didn't
include table definitions)
I'd be very happy if there is another way to achieve this and get rid of the
curse, I mean cursor!
There is nothing else going on overnight apart from backups.
"ML" wrote:
> One thing is clear - this can be done without a cursor. Or have I missed
> something?
> What else is going on in there during the night?
>
> ML
> --
> http://milambda.blogspot.com/|||Thanks Jim,
I don't think this is the case but I'll double check.
Progress seems to halt before then, right after the initial fetch.
Simon
"Jim Underwood" wrote:
> Maybe one of the tables that you are updating is locked and the stored
> procedure is waiting for the lock to free up?
> "Si" <Si@.discussions.microsoft.com> wrote in message
> news:4A962893-F4D1-4BDA-814E-9DB0E891EA6F@.microsoft.com...
> seconds.
>
>|||So the 'after first fetch'
is never reached?
"Si" <Si@.discussions.microsoft.com> wrote in message
news:E15851B6-2879-453D-8F01-C96375BF7A1C@.microsoft.com...
> Thanks Jim,
> I don't think this is the case but I'll double check.
> Progress seems to halt before then, right after the initial fetch.
> Simon
>
> "Jim Underwood" wrote:
>
|||Si wrote:
> The reason for the cursor was to extract the identity value from the
> instrument table to enter it into the external table. (Sorry, I didn't
> include table definitions)
> I'd be very happy if there is another way to achieve this and get rid of the
> curse, I mean cursor!
>
At least three possible solutions that don't need a cursor.
1. Create an insert trigger on the Instrument table to populate
External.
2. Use a table variable or temp table (SQL Server 2000):
DECLARE @instrument TABLE ...;
INSERT INTO @instrument (...)
SELECT ...
FROM staging ...;
INSERT INTO instrument (...)
SELECT ...
FROM @instrument;
INSERT INTO external
(instrumentid, ...)
SELECT I.instrumentid, ...
FROM instrument AS T
LEFT JOIN @instrument AS I
ON ... etc
3. Use a table variable and INSERT ... OUTPUT (SQL Server 2005):
DECLARE @instrument TABLE (instrumentid INTEGER);
INSERT INTO instrument (...)
OUTPUT Inserted.instrumentid INTO @instrument
SELECT ...
FROM staging ...;
INSERT INTO external
(instrumentid, ...)
SELECT instrumentid, ...
FROM @instrument;
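For reference, a slightly fuller sketch of option 3 under assumed, minimal table definitions (the real column lists were snipped above). OUTPUT can only return columns of the row just inserted, so to tie each new identity back to its staging row you capture a column that exists in instrument and join back on it (instrumentname here, assuming it is unique per staged row):
DECLARE @instrument TABLE (instrumentid INT, instrumentname varchar(50));
-- Capture each new identity together with a column we can join back on.
INSERT INTO instrument (issuerid, instrumentname)
OUTPUT Inserted.instrumentid, Inserted.instrumentname INTO @instrument
SELECT iss.IssuerId, st.SECURITY_DES
FROM staging st
LEFT JOIN Issuer iss ON st.ID_BB_COMPANY = iss.SourceId
WHERE iss.SourceId IS NOT NULL;
-- Then fan out to external in one set-based insert per external type.
INSERT INTO external (instrumentid, externaltype, externalid)
SELECT i.instrumentid, 'BB', st.id_bb_unique
FROM @instrument AS i
JOIN staging AS st ON st.SECURITY_DES = i.instrumentname;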
Hope this helps.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--|||curse (n) - an unnecessary cursor
I like that. :) Maybe it should be added:
http://www.webster.com/dictionary/curse
ML
http://milambda.blogspot.com/|||You don't need a cursor for a select.. insert..
insert into TableA( a, b, c )
select a, b, c from TableB
Meanwhile, the next time it seems to freeze, use the procedure sp_who2 to
determine if the process is blocked.
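For example, from a second connection while the proc appears to hang (sp_who2 takes an optional spid argument; with no argument it lists every session):
EXEC sp_who2
-- Find the spid running the proc and look at its BlkBy column; a number
-- there is the spid of the session holding the lock it is waiting on.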
"Si" <Si@.discussions.microsoft.com> wrote in message
news:717A26A2-8862-49D0-A78D-44879CD2FA3B@.microsoft.com...
> The reason for the cursor was to extract the identity value from the
> instrument table to enter it into the external table. (Sorry, I didn't
> include table definitions)
> I'd be very happy if there is another way to achieve this and get rid of
> the
> curse, I mean cursor!
>
> There is nothing else going on overnight apart from backups.
> "ML" wrote:
|||Some re-thinking on the design and the cursor is no longer needed.
Thanks very much everyone for your time and replies.
Simon.
"Si" wrote:
> <Snip>
Really need to use this query - but it has a sub-query in it...
I have the following query in my application and really need to run it, as it will help my application in terms of performance and efficiency. I need to get the record that was last entered into the database, and the following seems to do it in SQL Server Desktop, but not in SQL Server Mobile (as it has a sub-query). At the moment I have a query that returns all rows (in order) that have a DateCompleted column entered, and I then just skim the top row. So I'm in fact bringing back hundreds of rows just to get the first record - a complete waste of resources if you ask me.
Does anyone know how I can tackle this issue?
SELECT *
FROM tbl_NSP_AnswerSet
WHERE (DateCompleted IS NOT NULL)
AND DateCompleted = (SELECT MAX(DateCompleted) FROM tbl_NSP_AnswerSet)
ORDER BY DateCompleted DESC
Thanks
Tryst
Hi
Create a user-defined function which will calculate the max DateCompleted for that table, and then compare DateCompleted against it in the WHERE clause:
CREATE FUNCTION dbo.GetMaxDateCompleted()
RETURNS datetime
AS
BEGIN
RETURN (SELECT MAX(DateCompleted) FROM tbl_NSP_AnswerSet)
END
and then use it like this:
SELECT *
FROM tbl_NSP_AnswerSet
WHERE (DateCompleted IS NOT NULL)
AND DateCompleted = dbo.GetMaxDateCompleted()
ORDER BY DateCompleted DESC
Tryst
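If SQL Server Mobile also rejects the user-defined function, a TOP 1 rewrite avoids the subquery entirely and returns only the one row needed (a sketch; an index on DateCompleted would let it seek straight to the newest row):
SELECT TOP 1 *
FROM tbl_NSP_AnswerSet
WHERE DateCompleted IS NOT NULL
ORDER BY DateCompleted DESC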
Saturday, February 25, 2012
real time reporting + OLTP dbs
How does one implement real time reporting off of data from the OLTP
systems? Right now we run nightly extracts and load into OLAP DBs to
minimise locking, etc. while the extracts are being run. However our customers
now want to see real time data as soon as possible and we cannot give them
access to the live OLTP systems. Are there any ways to implement this
efficiently?
Have you tried replication? It should give you more frequent updates.
You might want to consider using read uncommitted queries, because replication
also needs to make changes to your OLAP database to keep data up to date.
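For example, a reporting query can opt into dirty reads like this (a minimal sketch; the table and column names are hypothetical):
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT OrderDate, COUNT(*) AS OrderCount
FROM dbo.Orders
GROUP BY OrderDate
-- Or per table, with a hint:
SELECT OrderDate, OrderTotal FROM dbo.Orders WITH (NOLOCK)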
--
Wei Xiao [MSFT]
SQL Server Storage Engine Development
http://weblogs.asp.net/weix
This posting is provided "AS IS" with no warranties, and confers no rights.
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:upTfU1d0EHA.3584@.TK2MSFTNGP11.phx.gbl...
> How does one implement real time reporting off of data from the OLTP
> systems. Right now we run nightly extracts and load into OLAP DBs to
> minimise locking,etc while the extracts are being run. However our
> customers
> now want to see real time data as soon as possible and we cannot give them
> access to the live OLTP systems. Are there any ways to implement this
> efficiently ?
|||We thought about replication but can't really justify replicating each of our
OLTP dbs to serve the reporting needs, besides the administration needed to
support it, etc.
"wei xiao [MSFT]" <weix@.online.microsoft.com> wrote in message
news:Oc0vFSf0EHA.2568@.TK2MSFTNGP11.phx.gbl...
> Have you tried replication? It should give you more frequent updates.
> You might want to consider using read uncommitted query because
replication
> also needs to make changes to your OLAP database to keep data up to date.
> --
> Wei Xiao [MSFT]
> SQL Server Storage Engine Development
> http://weblogs.asp.net/weix
> This posting is provided "AS IS" with no warranties, and confers no
rights.
> "Hassan" <fatima_ja@.hotmail.com> wrote in message
> news:upTfU1d0EHA.3584@.TK2MSFTNGP11.phx.gbl...
> > How does one implement real time reporting off of data from the OLTP
> > systems. Right now we run nightly extracts and load into OLAP DBs to
> > minimise locking,etc while the extracts are being run. However our
> > customers
> > now want to see real time data as soon as possible and we cannot give
them
> > access to the live OLTP systems. Are there any ways to implement this
> > efficiently ?
> >
> >
>