Monday, February 20, 2012

Reads/Duration Higher on ADO vs. QA

Hello.
I have an odd scenario. I just swapped out database servers today for a
heavily-hit web server. Everything went smoothly for the most part, except
I'm having sporadic queries that are taking an excessive amount of time to
execute. Excessive, as in, before the swap, these queries would take < 2
seconds, and now they are timing out (> 30 seconds). Also, this is only
happening for a very small percentage of cases.
To make things more confusing, I had it happening to me consistently for
about 20 minutes, but when I ran the exact same stored proc in Query
Analyzer, it would run just fine.
So I tried to trace it in Profiler. The results were very odd. As expected,
when hit from the website (ADO), my query was timing out (Duration ~30000),
while the trace results from QA were 1-2 seconds. But my reads from ADO
were HUGE -- like in the millions -- whereas the reads from QA were only in
the thousands.
What in the world would cause my reads to get so high, just by hitting it
from ADO? As I said, it's the exact same stored procedure and parameters (I
actually copied the TextData from Profiler and pasted it into QA).
I'm assuming that the execution plan between the two is different, but I
have no idea how to capture the execution plan from my particular
problem-causing query from the web (without getting flooded w/ all the other
queries running). I was able to capture my QA execution plan in Profiler by
filtering on SPID, but my SPID from ADO changes constantly.
Oh, by the way, after beating my head against the wall trying to figure this
out, my query from the web started behaving normally, so my troubleshooting
had to come to an end.
I'm very confused. Please, any suggestions would be appreciated.
Thanks.
Jerad
|||
Capture an execution plan from the Profiler and compare it to the execution
plan that was generated by QA.
Can you identify the stored procedure in Profiler that caused the
problem?
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> Hello.
> I have an odd scenario. I just swapped out database servers today for a
> heavily-hit web server. Everything went smoothly for the most part,
except
> I'm having sporadic queries that are taking an excessive amount of time to
> execute. Excessive, as in, before the swap, these queries would take < 2
> seconds, and now they are timing out (> 30 seconds). Also, this is only
> happening for a very small percentage of cases.
> To make things more confusing, I had it happening to me consistently for
> about 20 minutes, but when I ran the exact same stored proc in Query
> Analyzer, it would run just fine.
> So I try to trace it in Profiler. The results were very odd. As
expected,
> when hit from the website (ADO), my query was timing out (Duration
~30000),
> while the trace results from QA were 1-2 seconds. But my reads from ADO
> were HUGE -- like in the millions -- whereas the reads from QA were only
in
> the thousdands.
> What in the world would cause my reads to get so high, just by hitting it
> from ADO? As I said, it's the exact same stored procedure and parameters
(I
> actually copied the TextData from Profiler and pasted it into QA).
> I'm assuming that the execution plan between the two is different, but I
> have no idea how to capture the execution plan from my particular
> problem-causing query from the web (without getting flooded w/ all the
other
> queries running). I was able to capture my QA execution plan in Profile
by
> filtering on SPID, but my SPID from ADO changes constantly.
> Oh, by the way, after beating my head against the wall trying to figure
this
> out, my query from the web started behaving normally, so my
troubleshooting
> had to come to an end.
> I'm very confused. Please, any suggestions would be appreciated.
> Thanks.
> Jerad
>|||Hi
You may want to look at SP:Recompile events.
You may want to try scripting the trace output as a SQL script and running
that.
It could be that you are passing atypical parameter values, so read up on
parameter sniffing such as in Ken Henderson's "The Guru's Guide to SQL
Server Architecture and Internals" ISBN 0-201-70047-6
John
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> Hello.
> I have an odd scenario. I just swapped out database servers today for a
> heavily-hit web server. Everything went smoothly for the most part,
> except I'm having sporadic queries that are taking an excessive amount of
> time to execute. Excessive, as in, before the swap, these queries would
> take < 2 seconds, and now they are timing out (> 30 seconds). Also, this
> is only happening for a very small percentage of cases.
> To make things more confusing, I had it happening to me consistently for
> about 20 minutes, but when I ran the exact same stored proc in Query
> Analyzer, it would run just fine.
> So I try to trace it in Profiler. The results were very odd. As
> expected, when hit from the website (ADO), my query was timing out
> (Duration ~30000), while the trace results from QA were 1-2 seconds. But
> my reads from ADO were HUGE -- like in the millions -- whereas the reads
> from QA were only in the thousdands.
> What in the world would cause my reads to get so high, just by hitting it
> from ADO? As I said, it's the exact same stored procedure and parameters
> (I actually copied the TextData from Profiler and pasted it into QA).
> I'm assuming that the execution plan between the two is different, but I
> have no idea how to capture the execution plan from my particular
> problem-causing query from the web (without getting flooded w/ all the
> other queries running). I was able to capture my QA execution plan in
> Profile by filtering on SPID, but my SPID from ADO changes constantly.
> Oh, by the way, after beating my head against the wall trying to figure
> this out, my query from the web started behaving normally, so my
> troubleshooting had to come to an end.
> I'm very confused. Please, any suggestions would be appreciated.
> Thanks.
> Jerad
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> Hello.
> I have an odd scenario. I just swapped out database servers today for a
> heavily-hit web server. Everything went smoothly for the most part,
> except I'm having sporadic queries that are taking an excessive amount of
> time to execute. Excessive, as in, before the swap, these queries would
> take < 2 seconds, and now they are timing out (> 30 seconds). Also, this
> is only happening for a very small percentage of cases.
> To make things more confusing, I had it happening to me consistently for
> about 20 minutes, but when I ran the exact same stored proc in Query
> Analyzer, it would run just fine.
> So I try to trace it in Profiler. The results were very odd. As
> expected, when hit from the website (ADO), my query was timing out
> (Duration ~30000), while the trace results from QA were 1-2 seconds. But
> my reads from ADO were HUGE -- like in the millions -- whereas the reads
> from QA were only in the thousdands.
> What in the world would cause my reads to get so high, just by hitting it
> from ADO? As I said, it's the exact same stored procedure and parameters
> (I actually copied the TextData from Profiler and pasted it into QA).
> I'm assuming that the execution plan between the two is different, but I
> have no idea how to capture the execution plan from my particular
> problem-causing query from the web (without getting flooded w/ all the
> other queries running). I was able to capture my QA execution plan in
> Profile by filtering on SPID, but my SPID from ADO changes constantly.
> Oh, by the way, after beating my head against the wall trying to figure
> this out, my query from the web started behaving normally, so my
> troubleshooting had to come to an end.
> I'm very confused. Please, any suggestions would be appreciated.
>
I agree that the problem is that your executions are getting a different
plan in QA and ADO.
Typically stored procedure invocations will use a saved query plan, but SQL
Client connections have a number of settings which can affect the results of
queries, and so query plans cannot be shared between clients with different
settings. This might account for why you got a different plan in QA than
from ADO.
First, how do you reproduce the bad behavior in QA? Open up Profiler and you
will see the connection settings for the existing ADO connections in the
ExistingConnection event. Copy those out into QA and run them before you
run the query again. This should give you the best shot at reproducing the
query exactly.
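For instance, the replay might look like this. The values below are only
placeholders; use whatever the ExistingConnection event actually reports,
and your own procedure name. Mismatched SET options (ARITHABORT in
particular) are a classic reason QA and ADO end up with separately cached
plans.

set quoted_identifier on
set ansi_nulls on
set ansi_warnings on
set arithabort off  --QA turns this on by default; OLE DB/ADO connections often leave it off
set concat_null_yields_null on
exec GetUserThreads @UserID = 12345  --hypothetical procedure and parameter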
Alternatively, you can use ADO. Just paste this into a file on the server
with a .VBS extension
RunCommand.vbs listing
--
' Prompts for a SQL statement, runs it through ADO, and counts the rows
' returned. Adjust the connection string for your server and login.
sub RunCommand(sql)
    dim con
    set con = CreateObject("ADODB.Connection")
    con.Open "Provider=sqloledb;" & _
             "Data Source=(local);" & _
             "Initial Catalog=master;" & _
             "User ID=stan;Password=lee"
    dim rst
    set rst = con.Execute(sql)
    dim rc
    rc = 0
    Do While not rst.eof
        rc = rc + 1
        rst.MoveNext
    loop
    msgbox "got " & rc & " rows"
    rst.Close
    con.Close
end sub

Dim sql
sql = InputBox("Enter SQL")
RunCommand sql
--
General fixes include:
Update statistics. Out of date statistics are the number one cause of poor
query performance. Having "swapped out database servers" is a big red flag
here.
Suppress parameter sniffing in your stored procedure by not binding query
where-clause parameters directly to input parameters. Instead, bind the
query to a local variable set to the value of the input parameter.
Run the stored procedure WITH RECOMPILE; you will get better query plans,
at the cost of recompiling the procedure on every invocation.
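To make the last two concrete, here is a minimal sketch; the table, column,
and procedure names are invented for illustration:

--bind the search to a local variable so the optimizer cannot sniff the input value
create procedure GetUserThreads @UserID int
as
begin
    declare @LocalUserID int
    set @LocalUserID = @UserID  --the plan is built from average statistics, not this call's value
    select ThreadID from Thread where UserID = @LocalUserID
end
go
--or pay for a fresh plan on every call
exec GetUserThreads @UserID = 12345 with recompile
go
--and after a server swap, refresh the statistics
update statistics Thread with fullscan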
David
|||
Are you using a client-side or server-side cursor from ADO? This can have a
huge impact on the number of reads and the duration, even where the same
execution plan is used as in Query Analyser. Why? Extra reads can come from
the cursoring operations & extra duration from the overall time elapsed to
cursor over the resultset.
If you want to confirm the runtime execution plan, you can inject a SQL
Trace script into the actual command to log execution plans for the current
SPID. This isn't code you'd want to leave permanently in production, just
something you'd add temporarily to get the precise plans during run-time. To
do this, you'd create the trace in Profiler & use the File / Script Trace
menu to generate the script. You'd have to manually add the SPID filter via
a parameter you'd pick up at runtime. If your SQL's embedded into an ASP
page or Win32 app, you might extract it to a stored proc first so that you
can add the trace code more easily. Sure, this is jumping through a lot of
hoops (it regularly annoys me that this is so hard to do) but at least there
is a way to get the job done.
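For example, the tail of the generated script might gain something like
this. This is only a sketch: @TraceID and the sp_trace_setevent calls come
from the script Profiler writes out; trace column 12 is SPID, and the 0, 0
arguments mean AND / equals.

declare @Spid int
set @Spid = @@SPID
exec sp_trace_setfilter @TraceID, 12, 0, 0, @Spid
exec sp_trace_setstatus @TraceID, 1  --start the trace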
Regards,
Greg Linwood
SQL Server MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> Hello.
> I have an odd scenario. I just swapped out database servers today for a
> heavily-hit web server. Everything went smoothly for the most part,
> except I'm having sporadic queries that are taking an excessive amount of
> time to execute. Excessive, as in, before the swap, these queries would
> take < 2 seconds, and now they are timing out (> 30 seconds). Also, this
> is only happening for a very small percentage of cases.
> To make things more confusing, I had it happening to me consistently for
> about 20 minutes, but when I ran the exact same stored proc in Query
> Analyzer, it would run just fine.
> So I try to trace it in Profiler. The results were very odd. As
> expected, when hit from the website (ADO), my query was timing out
> (Duration ~30000), while the trace results from QA were 1-2 seconds. But
> my reads from ADO were HUGE -- like in the millions -- whereas the reads
> from QA were only in the thousdands.
> What in the world would cause my reads to get so high, just by hitting it
> from ADO? As I said, it's the exact same stored procedure and parameters
> (I actually copied the TextData from Profiler and pasted it into QA).
> I'm assuming that the execution plan between the two is different, but I
> have no idea how to capture the execution plan from my particular
> problem-causing query from the web (without getting flooded w/ all the
> other queries running). I was able to capture my QA execution plan in
> Profile by filtering on SPID, but my SPID from ADO changes constantly.
> Oh, by the way, after beating my head against the wall trying to figure
> this out, my query from the web started behaving normally, so my
> troubleshooting had to come to an end.
> I'm very confused. Please, any suggestions would be appreciated.
> Thanks.
> Jerad
>|||How can you tell if its a client or server side cursor ? Whats the
difference between the two ?
"Greg Linwood" <g_linwoodQhotmail.com> wrote in message
news:%23BfqHwZhFHA.3316@.TK2MSFTNGP14.phx.gbl...
> Are you using a client side or server side cursor from ADO? This can have
a
> huge impact on the number of reads and the duration, even where the same
> execution plan is used as the Query Analyser. Why? Extra reads can come
from
> the cursoring operations & extra duration from the overall time lapsed to
> cursor over the resultset.
> If you want to confirm the runtime execution plan, you can inject a SQL
> Trace script into the actual command to log execution plans for the
current
> SPID. This isn't code you'd want to leave permanently in production, just
> something you'd add temporarily to get the precise plans during run-time.
To
> do this, you'd create the trace in profiler & use the File / Script Trace
> menu to generate the script. You'd have to manually add the SPID filter
via
> a parameter you'd pick up at runtime.. If your SQL's embedded into an ASP
> page or Win32 app, you might extract it to a stored proc first so that you
> can add the trace codemore easily. Sure, this is jumping through a lot of
> hoops (it regularly annoys me that this is so hard to do) but at least
there
> is a way to get the job done.
> Regards,
> Greg Linwood
> SQL Server MVP
> "Jerad Rose" <no@.spam.com> wrote in message
> news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> > Hello.
> >
> > I have an odd scenario. I just swapped out database servers today for a
> > heavily-hit web server. Everything went smoothly for the most part,
> > except I'm having sporadic queries that are taking an excessive amount
of
> > time to execute. Excessive, as in, before the swap, these queries would
> > take < 2 seconds, and now they are timing out (> 30 seconds). Also,
this
> > is only happening for a very small percentage of cases.
> >
> > To make things more confusing, I had it happening to me consistently for
> > about 20 minutes, but when I ran the exact same stored proc in Query
> > Analyzer, it would run just fine.
> >
> > So I try to trace it in Profiler. The results were very odd. As
> > expected, when hit from the website (ADO), my query was timing out
> > (Duration ~30000), while the trace results from QA were 1-2 seconds.
But
> > my reads from ADO were HUGE -- like in the millions -- whereas the reads
> > from QA were only in the thousdands.
> >
> > What in the world would cause my reads to get so high, just by hitting
it
> > from ADO? As I said, it's the exact same stored procedure and
parameters
> > (I actually copied the TextData from Profiler and pasted it into QA).
> >
> > I'm assuming that the execution plan between the two is different, but I
> > have no idea how to capture the execution plan from my particular
> > problem-causing query from the web (without getting flooded w/ all the
> > other queries running). I was able to capture my QA execution plan in
> > Profile by filtering on SPID, but my SPID from ADO changes constantly.
> >
> > Oh, by the way, after beating my head against the wall trying to figure
> > this out, my query from the web started behaving normally, so my
> > troubleshooting had to come to an end.
> >
> > I'm very confused. Please, any suggestions would be appreciated.
> >
> > Thanks.
> > Jerad
> >
>|||A client cursor means that the client application downloads the entire
resultset before moving (e.g. rs.MoveNext or dr.Read) over the rows. With a
server-side cursor, the client application brings down rows one at a time
during rs.MoveNext or dr.Read operations.
You can tell the difference by checking ADO's CursorLocation property on
the connection or recordset objects.
btw, queries executed over a client-side cursor can still have a much longer
duration than the same queries executed directly on the server via Query
Analyser, because the resultset needs to be sent back over the network to the
client application. If the application stalls the process of reading the
resultset for any reason (the client PC might be busy running multiple
applications, for example), this reflects in the duration value.
HTH
Regards,
Greg Linwood
SQL Server MVP
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:%23eRt9AdhFHA.1252@.TK2MSFTNGP09.phx.gbl...
> How can you tell if its a client or server side cursor ? Whats the
> difference between the two ?
> "Greg Linwood" <g_linwoodQhotmail.com> wrote in message
> news:%23BfqHwZhFHA.3316@.TK2MSFTNGP14.phx.gbl...
>> Are you using a client side or server side cursor from ADO? This can have
> a
>> huge impact on the number of reads and the duration, even where the same
>> execution plan is used as the Query Analyser. Why? Extra reads can come
> from
>> the cursoring operations & extra duration from the overall time lapsed to
>> cursor over the resultset.
>> If you want to confirm the runtime execution plan, you can inject a SQL
>> Trace script into the actual command to log execution plans for the
> current
>> SPID. This isn't code you'd want to leave permanently in production, just
>> something you'd add temporarily to get the precise plans during run-time.
> To
>> do this, you'd create the trace in profiler & use the File / Script Trace
>> menu to generate the script. You'd have to manually add the SPID filter
> via
>> a parameter you'd pick up at runtime.. If your SQL's embedded into an ASP
>> page or Win32 app, you might extract it to a stored proc first so that
>> you
>> can add the trace codemore easily. Sure, this is jumping through a lot of
>> hoops (it regularly annoys me that this is so hard to do) but at least
> there
>> is a way to get the job done.
>> Regards,
>> Greg Linwood
>> SQL Server MVP
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
>> > Hello.
>> >
>> > I have an odd scenario. I just swapped out database servers today for
>> > a
>> > heavily-hit web server. Everything went smoothly for the most part,
>> > except I'm having sporadic queries that are taking an excessive amount
> of
>> > time to execute. Excessive, as in, before the swap, these queries
>> > would
>> > take < 2 seconds, and now they are timing out (> 30 seconds). Also,
> this
>> > is only happening for a very small percentage of cases.
>> >
>> > To make things more confusing, I had it happening to me consistently
>> > for
>> > about 20 minutes, but when I ran the exact same stored proc in Query
>> > Analyzer, it would run just fine.
>> >
>> > So I try to trace it in Profiler. The results were very odd. As
>> > expected, when hit from the website (ADO), my query was timing out
>> > (Duration ~30000), while the trace results from QA were 1-2 seconds.
> But
>> > my reads from ADO were HUGE -- like in the millions -- whereas the
>> > reads
>> > from QA were only in the thousdands.
>> >
>> > What in the world would cause my reads to get so high, just by hitting
> it
>> > from ADO? As I said, it's the exact same stored procedure and
> parameters
>> > (I actually copied the TextData from Profiler and pasted it into QA).
>> >
>> > I'm assuming that the execution plan between the two is different, but
>> > I
>> > have no idea how to capture the execution plan from my particular
>> > problem-causing query from the web (without getting flooded w/ all the
>> > other queries running). I was able to capture my QA execution plan in
>> > Profile by filtering on SPID, but my SPID from ADO changes constantly.
>> >
>> > Oh, by the way, after beating my head against the wall trying to figure
>> > this out, my query from the web started behaving normally, so my
>> > troubleshooting had to come to an end.
>> >
>> > I'm very confused. Please, any suggestions would be appreciated.
>> >
>> > Thanks.
>> > Jerad
>> >
>>
>|||Thanks so much for all the great replies.
It's late, so I have to make this quick.
- I was able to reproduce the slow execution through QA, so I don't think
it's necessarily a QA vs. ADO thing. Sometimes it runs OK in ADO and in QA.
Sometimes it's slow in both (or one or the other). I think this may also
rule out client-side vs. server-side cursors.
- I did run sp_updatestats, but that didn't seem to help much.
- I tried WITH RECOMPILE, but that didn't seem to help much either.
- I managed to capture the execution plan for when it runs fast (< 2
seconds) vs. when it times out (> 30 seconds).
Here are the links to those (the top of each file lists the stats):
http://www.animalcrossingcommunity.com/slowplan.txt
http://www.animalcrossingcommunity.com/fastplan.txt
Here is the actual stored proc that is run (this is actually a snippet --
some stuff runs afterwards, but I'm 99% sure this part is what is causing the
problems):
http://www.animalcrossingcommunity.com/query.txt
Thanks again for all your help.
Jerad
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> Hello.
> I have an odd scenario. I just swapped out database servers today for a
> heavily-hit web server. Everything went smoothly for the most part,
> except I'm having sporadic queries that are taking an excessive amount of
> time to execute. Excessive, as in, before the swap, these queries would
> take < 2 seconds, and now they are timing out (> 30 seconds). Also, this
> is only happening for a very small percentage of cases.
> To make things more confusing, I had it happening to me consistently for
> about 20 minutes, but when I ran the exact same stored proc in Query
> Analyzer, it would run just fine.
> So I try to trace it in Profiler. The results were very odd. As
> expected, when hit from the website (ADO), my query was timing out
> (Duration ~30000), while the trace results from QA were 1-2 seconds. But
> my reads from ADO were HUGE -- like in the millions -- whereas the reads
> from QA were only in the thousdands.
> What in the world would cause my reads to get so high, just by hitting it
> from ADO? As I said, it's the exact same stored procedure and parameters
> (I actually copied the TextData from Profiler and pasted it into QA).
> I'm assuming that the execution plan between the two is different, but I
> have no idea how to capture the execution plan from my particular
> problem-causing query from the web (without getting flooded w/ all the
> other queries running). I was able to capture my QA execution plan in
> Profile by filtering on SPID, but my SPID from ADO changes constantly.
> Oh, by the way, after beating my head against the wall trying to figure
> this out, my query from the web started behaving normally, so my
> troubleshooting had to come to an end.
> I'm very confused. Please, any suggestions would be appreciated.
> Thanks.
> Jerad
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:%23l200yehFHA.328@.tk2msftngp13.phx.gbl...
> Thanks so much for all the great replies.
> It's late, so I have to make this quick.
> . I was able to reproduce the slow execution through QA, so I don't think
> it's necessarily a QA vs. ADO thing. Sometimes it runs ok in ADO and in
> QA. Sometimes it's slow in both (or one or the other). I think this may
> also rule out client-side vs. server-side cursors.
> . I did run sp_updatestats, but that didn't seem to help much
> . I tried WITH RECOMPILE, but didn't seem to help much
> . I managed to capture the execution plan for when it runs fast (< 2
> seconds) vs. when it times out (> 30 seconds).
> Here are the links to those (top lists stats):
> http://www.animalcrossingcommunity.com/slowplan.txt
> http://www.animalcrossingcommunity.com/fastplan.txt
> Here is the actual stored proc that is run (this is actually a snippet --
> some stuff runs afterwards, but I'm 99% this part is what is causing the
> problems):
> http://www.animalcrossingcommunity.com/query.txt
> Thanks again for all your help.
>
Good. Now post the table DDL with indexes and an estimate of the number
of rows in each table.
This is a hard one, and the best plan in cases like this is often to look
beyond these two plans and see if changes to the indexing structure or query
formulation can result in a much cheaper (and therefore more stable) plan.
David
|||
Ok, here you go:
http://www.animalcrossingcommunity.com/ddl.txt
Here are some more clues. Before I swapped servers (basically the same
specs, but the new one has a RAID 1 configuration and the old one just had
one HD), this particular query gave me virtually no problems. It typically
ran in under a second, seemingly averaging about 0.5 seconds. Now, even at
best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
Thanks again for taking the time to help. If there's any other info I can
provide to help diagnose this, let me know.
Jerad
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:urYuhAjhFHA.1048@.tk2msftngp13.phx.gbl...
> "Jerad Rose" <no@.spam.com> wrote in message
> news:%23l200yehFHA.328@.tk2msftngp13.phx.gbl...
>> Thanks so much for all the great replies.
>> It's late, so I have to make this quick.
>> . I was able to reproduce the slow execution through QA, so I don't think
>> it's necessarily a QA vs. ADO thing. Sometimes it runs ok in ADO and in
>> QA. Sometimes it's slow in both (or one or the other). I think this may
>> also rule out client-side vs. server-side cursors.
>> . I did run sp_updatestats, but that didn't seem to help much
>> . I tried WITH RECOMPILE, but didn't seem to help much
>> . I managed to capture the execution plan for when it runs fast (< 2
>> seconds) vs. when it times out (> 30 seconds).
>> Here are the links to those (top lists stats):
>> http://www.animalcrossingcommunity.com/slowplan.txt
>> http://www.animalcrossingcommunity.com/fastplan.txt
>> Here is the actual stored proc that is run (this is actually a snippet --
>> some stuff runs afterwards, but I'm 99% this part is what is causing the
>> problems):
>> http://www.animalcrossingcommunity.com/query.txt
>> Thanks again for all your help.
> Good. Now post the table DDL with indexes and a an estimate of the number
> of rows in each table.
> This is a hard one, and the best plan in cases like this is often to look
> beyond these two plans and see if changes to the indexing structure or
> query formulation can result in a much cheaper (and therefore more stable)
> plan.
> David
> David
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
> Ok, here you go:
> http://www.animalcrossingcommunity.com/ddl.txt
> Here are some more clues. Before I swapped servers (basically same specs,
> but new has Raid1 configuration and old just had one HD), this particular
> query gave me virtually no problems. It typically ran in under a second,
> seemingly averaging about 0.5 seconds. Now, even at best, I'm getting 1-2
> or more seconds -- at worst, 30+ seconds.
> Thanks again for taking time to help. If there's any other info I can
> provide to help diagnosis, let me know.
> Jerad
>
No smoking gun, but here are some things to try:
--ThreadID is the leading column in the clustered index. This index is
--duplicative.
drop index ThreadUser.IX_ThreadUser

--Foreign keys should be indexed, and linking tables should be indexed
--for 2-way access
create index IX_PrivateThreadUser_User on PrivateThreadUser(UserID)

--Try a compound index to support this search
create index IX_ThreadSearch on Thread(ThreadModifyDateTime desc, ThreadIsDeleted, ThreadIsLocked)

--and modify the procedure to not suppress index usage on ThreadModifyDateTime
--change
and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 > getdate())
--to
and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime > dateadd(d, -1, getdate()))
David
|||
Wow.
Ok, so I have NO idea why I didn't already have UserID indexed on
PrivateThread. That could probably explain most of the issues -- and why I
was approaching reads in the millions. Good catch. The others made perfect
sense as well.
So, I've got all of your suggestions implemented, and so far it seems to
have made a big difference. So much thanks again for your time and help!
My only confusion now is why this didn't seem to be a problem on the old
server. Especially considering my lack of an index for UserID on
PrivateThread, I'm surprised this didn't give me trouble before.
I've been running a trace, and so far, reads have not exceeded 10,000 and
duration is less than a second except on a few rare occasions where it hits
a few seconds -- so that seems to have fixed my problem so far. Now I'm a
little concerned about how many other spots have inadequate indexing.
Thanks again David (and others) for your time and help. It has been much
appreciated.
Jerad
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
> "Jerad Rose" <no@.spam.com> wrote in message
> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
> No smoking gun, but here's some things to try:
> --ThreadID is the leading column in the clustered index. This index is
> duplicative.
> drop index ThreadUser.IX_ThreadUser
> --Foregn keys should be indexed, and linking tables should be indexed
> --for 2-way access
> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
> --Try a compund index to support this search
> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
> desc,ThreadIsDeleted,ThreadIsLocked)
> --and modify the procedure to not suppress index usage on
> ThreadModifyDateTime
> --change
> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
> getdate())
> --to
> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
> datediff(d,-1,getdate()))
> David
>|||>> I've been running a trace, and so far, reads have not exceeded 10,000 and
Is that for a single query? If so then you aren't done yet <g>. Unless you
are pulling back thousands of rows (which you should question why), you
should be able to do this in a few hundred reads at best with proper
indexing and WHERE clauses. There are exceptions, but that many reads
indicate either a full or partial scan. You should find out whether that
table is properly indexed as well.
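One quick way to see where the reads go, table by table, is to run the call
in QA with I/O statistics on. The procedure name and parameter values below
are placeholders; use the values from a slow row in your trace.

set statistics io on
exec GetThreadList @BoardID = 63, @UserID = 12345
set statistics io off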
Andrew J. Kelly SQL MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
> Wow.
> Ok, so I have NO idea why I didn't already have UserID indexed on
> PrivateThread. That could probably explain most of the issues -- and why
> I was approaching reads in the millions. Good catch. The others made
> perfect sense as well.
> So, I've got all of your suggestions implmeneted, and so far it seems to
> have made a big difference. So much thanks again for your time and help!
> My only confusion now, is, why this didn't seem to be a problem on the old
> server. Especially considering my lack of an index for UserID on
> PrivateThread, I'm surprised this didn't give me trouble before.
> I've been running a trace, and so far, reads have not exceeded 10,000 and
> duration is less than a second except on a few rare occasions where it
> hits a few seconds -- so that seems to have fixed my problem so far. Now
> I'm a little concerned of how many other spots have inadequate indexing.
> Thanks again David (and others) for your time and help. It has been much
> appreciated.
> Jerad
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>|||Hi Andrew.
Well, no, it's a few queries. I have one query that loads ThreadIDs into a
temp table, which I later hit for paging. For example, the first query (the
one we've been dealing with) pulls all threads for a particular user, and
stores the ID along w/ a sequential identity column into a temp table.
Then, I have a second query that joins the temp table with the rest of my
tables to return a set of records -- say, records 201-250 (page 5). So, the
first query could potentially return thousands of records.
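In rough outline, the pattern looks like this (simplified, with made-up
names):

--step 1: stage the ordered ThreadIDs with a sequential row number
declare @TempTable table (RowID int identity(1,1), ThreadID int)
insert into @TempTable (ThreadID)
select ThreadID
from Thread
where UserID = @UserID
order by ThreadLastPostDateTime desc

--step 2: join back and return just one page, e.g. rows 201-250 (page 5)
select Thread.*
from @TempTable tt
join Thread
  on Thread.ThreadID = tt.ThreadID
where tt.RowID between 201 and 250
order by tt.RowID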
I'm not going to say that I still don't have some inadequate indexing, but
I'm much happier with the results I have now than what I was getting before.
But feel free to offer any other suggestions, based on the info in the above
links. If you need additional info other than what has been posted, let me
know.
Thanks again.
Jerad
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eB3rR7jhFHA.1164@.TK2MSFTNGP10.phx.gbl...
>> I've been running a trace, and so far, reads have not exceeded 10,000
>> and
> Is that for a single query? If so then you aren't done yet<g>. Unless
> you are pulling back thousands of rows (which you should question why) you
> should be able to do this in a few hundred reads at best with proper
> indexing and WHERE clauses. There are exceptions but that many reads
> indicates either a full or partial scan. You should find out if that is
> the properly indexed as well.
>
> --
> Andrew J. Kelly SQL MVP
>
> "Jerad Rose" <no@.spam.com> wrote in message
> news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
>> Wow.
>> Ok, so I have NO idea why I didn't already have UserID indexed on
>> PrivateThread. That could probably explain most of the issues -- and why
>> I was approaching reads in the millions. Good catch. The others made
>> perfect sense as well.
>> So, I've got all of your suggestions implmeneted, and so far it seems to
>> have made a big difference. So much thanks again for your time and help!
>> My only confusion now, is, why this didn't seem to be a problem on the
>> old server. Especially considering my lack of an index for UserID on
>> PrivateThread, I'm surprised this didn't give me trouble before.
>> I've been running a trace, and so far, reads have not exceeded 10,000 and
>> duration is less than a second except on a few rare occasions where it
>> hits a few seconds -- so that seems to have fixed my problem so far. Now
>> I'm a little concerned of how many other spots have inadequate indexing.
>> Thanks again David (and others) for your time and help. It has been much
>> appreciated.
>> Jerad
>> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
>> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>>
>|||Ok, another update.
Unfortunately, it looks like this only helped temporarily. It's almost as
if I perform some sort of "reset" (or maybe it's updating the statistics),
and it works for a while, but then my indexes get outdated and it starts
requiring huge numbers of reads once again.
I just had one trace that hit over 11 million reads. This just doesn't make
sense when 30 minutes ago, this exact same query only had a couple thousand
reads.
Still confused (and frustrated).
Thanks again for any suggestions.
Jerad
"Jerad Rose" <no@.spam.com> wrote in message
news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
> Wow.
> Ok, so I have NO idea why I didn't already have UserID indexed on
> PrivateThread. That could probably explain most of the issues -- and why
> I was approaching reads in the millions. Good catch. The others made
> perfect sense as well.
> So, I've got all of your suggestions implmeneted, and so far it seems to
> have made a big difference. So much thanks again for your time and help!
> My only confusion now, is, why this didn't seem to be a problem on the old
> server. Especially considering my lack of an index for UserID on
> PrivateThread, I'm surprised this didn't give me trouble before.
> I've been running a trace, and so far, reads have not exceeded 10,000 and
> duration is less than a second except on a few rare occasions where it
> hits a few seconds -- so that seems to have fixed my problem so far. Now
> I'm a little concerned of how many other spots have inadequate indexing.
> Thanks again David (and others) for your time and help. It has been much
> appreciated.
> Jerad
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>|||The number of rows affected is one of the criteria for the use of an index
as a scan or a seek. If your query is not reusing a query plan it will
generate a new one each time you call it. It uses the values you supply to
determine what the plan should be. If this time the number of rows was too
much for a seek it might choose to do a scan. Some of the values for that
query will be higher than others in the number of rows returned. I haven't
spent a lot of time looking at your query and DDL but that join is fairly
complex and has several LEFT JOINS in it that probably don't help. Which
table is doing the most reads? You might want to be sure there is a
clustered index on the column that you are doing the join or range scan on.
--
Andrew J. Kelly SQL MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:uLgXBHkhFHA.3300@.TK2MSFTNGP15.phx.gbl...
> Ok, another update.
> Unfortunately, it looks like this only helped temporarily. It's almost as
> if I perform some sort of "reset" (or maybe updating the statistics), and
> it works for a while, but then my indexes get outdated and start requiring
> several reads once again.
> I just had one trace that hit over 11 million reads. This just doesn't
> make sense when 30 minutes ago, this exact same query only had a couple
> thousand reads.
> Stiff confused (and frustrated).
> Thanks again for any suggestions.
> Jerad
> "Jerad Rose" <no@.spam.com> wrote in message
> news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
>> Wow.
>> Ok, so I have NO idea why I didn't already have UserID indexed on
>> PrivateThread. That could probably explain most of the issues -- and why
>> I was approaching reads in the millions. Good catch. The others made
>> perfect sense as well.
>> So, I've got all of your suggestions implmeneted, and so far it seems to
>> have made a big difference. So much thanks again for your time and help!
>> My only confusion now, is, why this didn't seem to be a problem on the
>> old server. Especially considering my lack of an index for UserID on
>> PrivateThread, I'm surprised this didn't give me trouble before.
>> I've been running a trace, and so far, reads have not exceeded 10,000 and
>> duration is less than a second except on a few rare occasions where it
>> hits a few seconds -- so that seems to have fixed my problem so far. Now
>> I'm a little concerned of how many other spots have inadequate indexing.
>> Thanks again David (and others) for your time and help. It has been much
>> appreciated.
>> Jerad
>> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
>> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>>
>|||I didn't provide the entire stored proc, as I didn't want to muddy the
waters with too much information. But as I think about it, it may have to
do with the fact that I have several sets of "first queries" that are run
based on the parameters (namely BoardID). Maybe that's causing the
execution plan to be inefficient.
Here is the link to the entire proc, and you'll see what I'm talking about:
http://www.animalcrossingcommunity.com/proc.txt
I did this to avoid having redundant code in many procs, but I would much
rather have redundant code if it meant better performance.
Take a look at that and let me know if you think I'm on the right track.
Thanks again.
Jerad
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eXF4ltkhFHA.1968@.TK2MSFTNGP14.phx.gbl...
> The number of rows affected is one of the criteria for the use of an index
> as a scan or a seek. If your query is not reusing a query plan it will
> generate a new one each time you call it. It uses the values you supply to
> determine what the plan should be. If this time the number of rows was
> too much for a seek it might choose to do a scan. Some of the values for
> that query will be higher than others in the number of rows returned. I
> haven't spent a lot of time looking at your query and DDL but that join is
> fairly complex and has several LEFT JOINS in it that probably don't help.
> Which table is doing the most reads? You might want to be sure there is a
> clustered index on the column that you are doing the join or range scan
> on.
> --
> Andrew J. Kelly SQL MVP
>
> "Jerad Rose" <no@.spam.com> wrote in message
> news:uLgXBHkhFHA.3300@.TK2MSFTNGP15.phx.gbl...
>> Ok, another update.
>> Unfortunately, it looks like this only helped temporarily. It's almost
>> as if I perform some sort of "reset" (or maybe updating the statistics),
>> and it works for a while, but then my indexes get outdated and start
>> requiring several reads once again.
>> I just had one trace that hit over 11 million reads. This just doesn't
>> make sense when 30 minutes ago, this exact same query only had a couple
>> thousand reads.
>> Stiff confused (and frustrated).
>> Thanks again for any suggestions.
>> Jerad
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
>> Wow.
>> Ok, so I have NO idea why I didn't already have UserID indexed on
>> PrivateThread. That could probably explain most of the issues -- and
>> why I was approaching reads in the millions. Good catch. The others
>> made perfect sense as well.
>> So, I've got all of your suggestions implmeneted, and so far it seems to
>> have made a big difference. So much thanks again for your time and
>> help!
>> My only confusion now, is, why this didn't seem to be a problem on the
>> old server. Especially considering my lack of an index for UserID on
>> PrivateThread, I'm surprised this didn't give me trouble before.
>> I've been running a trace, and so far, reads have not exceeded 10,000
>> and duration is less than a second except on a few rare occasions where
>> it hits a few seconds -- so that seems to have fixed my problem so far.
>> Now I'm a little concerned of how many other spots have inadequate
>> indexing.
>> Thanks again David (and others) for your time and help. It has been
>> much appreciated.
>> Jerad
>> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
>> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>>
>>
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:OhEInElhFHA.1460@.tk2msftngp13.phx.gbl...
>I didn't provide the entire stored proc, as I didn't want to muddy the
>waters with too much information. But as I think about it, it may have to
>do with the fact that I have several sets of "first queries", that are ran
>based on the parameters (namely BoardID). Maybe that's causing the
>execution plan to be inneficient.
> Here is the link to the entire proc, and you'll see what I'm talking
> about:
> http://www.animalcrossingcommunity.com/proc.txt
> I did this to prevent from having to have redundant code in many procs,
> but I will much rather have redundant code if it meant better performance.
> Take a look at that and let me know if you think I'm on the right track.
> Thanks again.
> Jerad
>
Ok, I would further break down the problem query by @BoardID, since the
query logic differs in the two cases. That way SQL doesn't have to use the
same plan for both cases.
else if @BoardID = 63
  insert into @TempTable(RowID)
  select Thread.ThreadID
  from Thread (nolock)
  join Users (nolock)
    on Users.UserID = Thread.UserID
  join UserType
    on UserType.UserTypeID = @UserTypeID
  left join (
      select ThreadID
           , min(UserTypeRank) MinUserTypeRank
      from PrivateThreadUser (nolock)
      join Users (nolock)
        on Users.UserID = PrivateThreadUser.UserID
      join UserType
        on UserType.UserTypeID = Users.UserTypeID
      group by ThreadID
    ) PrivateThreadUserMinRank
    on PrivateThreadUserMinRank.ThreadID = Thread.ThreadID
  left join PrivateThreadUser (nolock)
    on PrivateThreadUser.ThreadID = Thread.ThreadID
    and PrivateThreadUser.UserID = @UserID
  left join ThreadUser (nolock)
    on ThreadUser.ThreadID = Thread.ThreadID
    and ThreadUser.UserID = @UserID
  where Thread.BoardID = 63
    and Thread.ThreadIsDeleted = 0
    and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime > dateadd(d,-1,getdate()))
    and (ThreadUser.ThreadIsDeleted is null or ThreadUser.ThreadIsDeleted = 0)
  order by Thread.ThreadIsSticky desc
         , Thread.ThreadLastPostDateTime desc
else if @BoardID = 36
  insert into @TempTable(RowID)
  select Thread.ThreadID
  from Thread (nolock)
  join Users (nolock)
    on Users.UserID = Thread.UserID
  join UserType
    on UserType.UserTypeID = @UserTypeID
  left join (
      select ThreadID
           , min(UserTypeRank) MinUserTypeRank
      from PrivateThreadUser (nolock)
      join Users (nolock)
        on Users.UserID = PrivateThreadUser.UserID
      join UserType
        on UserType.UserTypeID = Users.UserTypeID
      group by ThreadID
    ) PrivateThreadUserMinRank
    on PrivateThreadUserMinRank.ThreadID = Thread.ThreadID
  left join PrivateThreadUser (nolock)
    on PrivateThreadUser.ThreadID = Thread.ThreadID
    and PrivateThreadUser.UserID = @UserID
  left join ThreadUser (nolock)
    on ThreadUser.ThreadID = Thread.ThreadID
    and ThreadUser.UserID = @UserID
  where Thread.BoardID = 36
    and Thread.ThreadIsDeleted = 0
    and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime > dateadd(d,-1,getdate()))
    and (PrivateThreadUser.ThreadID is not null or @BoardID <> 36)
    and (PrivateThreadUserMinRank.MinUserTypeRank > UserType.UserTypeRank)
  order by Thread.ThreadIsSticky desc
         , Thread.ThreadLastPostDateTime desc
David
|||
Wow, well, I don't know what to tell you from here. There is no way for me
to tell how efficient these queries are without actually being connected to
your system. Each one of those IF blocks can certainly have a different
query plan and it gets quite convoluted when all lumped together in one sp.
You might consider breaking them into individual sp's and calling the
appropriate one to begin with. Do you really need to do that whole join
twice? Can't you grab what info is relevant the first time and avoid some
of the joins in the second pass?
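In outline, a split like that might look as follows, with a thin dispatcher
so each branch compiles and caches on its own (procedure names invented):

create procedure GetThreadList
    @BoardID int, @UserID int, @UserTypeID int
as
begin
    if @BoardID = 63
        exec GetThreadList_Board63 @UserID, @UserTypeID
    else if @BoardID = 36
        exec GetThreadList_Private @UserID, @UserTypeID
    else
        exec GetThreadList_Standard @BoardID, @UserID, @UserTypeID
end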
--
Andrew J. Kelly SQL MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:OhEInElhFHA.1460@.tk2msftngp13.phx.gbl...
>I didn't provide the entire stored proc, as I didn't want to muddy the
>waters with too much information. But as I think about it, it may have to
>do with the fact that I have several sets of "first queries", that are ran
>based on the parameters (namely BoardID). Maybe that's causing the
>execution plan to be inneficient.
> Here is the link to the entire proc, and you'll see what I'm talking
> about:
> http://www.animalcrossingcommunity.com/proc.txt
> I did this to prevent from having to have redundant code in many procs,
> but I will much rather have redundant code if it meant better performance.
> Take a look at that and let me know if you think I'm on the right track.
> Thanks again.
> Jerad
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:eXF4ltkhFHA.1968@.TK2MSFTNGP14.phx.gbl...
>> The number of rows affected is one of the criteria for the use of an
>> index as a scan or a seek. If your query is not reusing a query plan it
>> will generate a new one each time you call it. It uses the values you
>> supply to determine what the plan should be. If this time the number of
>> rows was too much for a seek it might choose to do a scan. Some of the
>> values for that query will be higher than others in the number of rows
>> returned. I haven't spent a lot of time looking at your query and DDL
>> but that join is fairly complex and has several LEFT JOINS in it that
>> probably don't help. Which table is doing the most reads? You might want
>> to be sure there is a clustered index on the column that you are doing
>> the join or range scan on.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:uLgXBHkhFHA.3300@.TK2MSFTNGP15.phx.gbl...
>> Ok, another update.
>> Unfortunately, it looks like this only helped temporarily. It's almost
>> as if I perform some sort of "reset" (or maybe updating the statistics),
>> and it works for a while, but then my indexes get outdated and start
>> requiring several reads once again.
>> I just had one trace that hit over 11 million reads. This just doesn't
>> make sense when 30 minutes ago, this exact same query only had a couple
>> thousand reads.
>> Stiff confused (and frustrated).
>> Thanks again for any suggestions.
>> Jerad
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
>> Wow.
>> Ok, so I have NO idea why I didn't already have UserID indexed on
>> PrivateThread. That could probably explain most of the issues -- and
>> why I was approaching reads in the millions. Good catch. The others
>> made perfect sense as well.
>> So, I've got all of your suggestions implemented, and so far it seems
>> to have made a big difference. So much thanks again for your time and
>> help!
>> My only confusion now is why this didn't seem to be a problem on the
>> old server. Especially considering my lack of an index for UserID on
>> PrivateThread, I'm surprised this didn't give me trouble before.
>> I've been running a trace, and so far, reads have not exceeded 10,000
>> and duration is less than a second except on a few rare occasions where
>> it hits a few seconds -- so that seems to have fixed my problem so far.
>> Now I'm a little concerned about how many other spots have inadequate
>> indexing.
>> Thanks again David (and others) for your time and help. It has been
>> much appreciated.
>> Jerad
>> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
>> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically the same
>> specs, but the new one has a RAID 1 configuration and the old one just
>> had a single drive), this particular query gave me virtually no
>> problems. It typically ran in under a second, averaging about 0.5
>> seconds. Now, even at best, I'm getting 1-2 seconds -- at worst, 30+
>> seconds.
>> Thanks again for taking time to help. If there's any other info I
>> can provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here are some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foreign keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreadUser(UserID)
>> --Try a compound index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> dateadd(d,-1,getdate()))
>> David
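One way to check whether changes like these actually cut the reads is to run the query from Query Analyzer with per-table I/O reporting turned on. The call below is a placeholder for the real proc and parameters:

set statistics io on
exec GetBoardThreads @BoardID = 1  -- placeholder call
set statistics io off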
>>
>>
>>
>|||Hey guys.
I went ahead and split that proc into multiple procs, so that there were no
IF blocks. This seems to have helped a great deal, as this proc is no
longer taking more than a second or two (worst case). It's been running
like this for about a day, so hopefully this was the answer. I'm still
getting a few thousand reads, but I'm not sure I'll be able to get it much
lower than that.
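For the record, the split described here can look roughly like this. The procedure names, parameters, and bodies are illustrative guesses, not the actual code:

-- Each former IF branch becomes its own proc with its own cached plan.
create procedure GetThreads_AllBoards
    @UserID int
as
select ThreadID
from Thread
order by ThreadIsSticky desc, ThreadLastPostDateTime desc
go
create procedure GetThreads_ByBoard
    @BoardID int,
    @UserID int
as
select ThreadID
from Thread
where BoardID = @BoardID  -- assumes a BoardID column on Thread
order by ThreadIsSticky desc, ThreadLastPostDateTime desc
go
-- A thin dispatcher keeps the callers unchanged; the IF now only picks a
-- proc instead of forcing very different queries to share one plan.
create procedure GetThreads
    @BoardID int = null,
    @UserID int
as
if @BoardID is null
    exec GetThreads_AllBoards @UserID
else
    exec GetThreads_ByBoard @BoardID, @UserID
go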
Thanks again for all your help. This has been a good learning experience
for me, as I now feel better equipped to diagnose problem queries and
indexing issues such as this.
Jerad
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ODOgF4lhFHA.3436@.tk2msftngp13.phx.gbl...
> Wow. Well, I don't know what to tell you from here. There is no way for me
> to tell how efficient these queries are without actually being connected
> to your system. Each one of those IF blocks can certainly have a
> different query plan and it gets quite convoluted when all lumped together
> in one sp. You might consider breaking them into individual sp's and
> calling the appropriate one to begin with. Do you really need to do that
> whole join twice? Can't you grab what info is relevant the first time and
> avoid some of the joins in the second pass?
> --
> Andrew J. Kelly SQL MVP
