Showing posts with label web. Show all posts

Monday, March 26, 2012

Receipt with copies

I need some urgent help! I developed a .NET web app. I used the Crystal Reports system included with VB .NET to make a report that prints the client's receipt, but I need that same information repeated three times on the same page (original and 2 copies). The paper (8.5" x 11") is perforated in thirds. I've tried to repeat the same info three times in the Crystal Reports document, without success. I've tried subreports, etc. Any suggestions? Please help.
Thanks.

Which database are you using?
You can create a stored procedure containing the query:

Select query
Union All
Select query
Union All
Select query

and design the report against that stored procedure.
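That UNION ALL pattern can be sketched as follows (illustrated with Python's sqlite3 so it runs self-contained; the receipts table and its columns are hypothetical, and in practice the query would live in the stored procedure that Crystal Reports binds to):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE receipts (id INTEGER, client TEXT, amount REAL)")
conn.execute("INSERT INTO receipts VALUES (1, 'Acme', 99.50)")

# Repeat the same receipt row three times (original + 2 copies) so the
# report engine lays out one copy per perforated third of the page.
rows = conn.execute("""
    SELECT client, amount, 'ORIGINAL' AS copy_label FROM receipts WHERE id = ?
    UNION ALL
    SELECT client, amount, 'COPY 1' FROM receipts WHERE id = ?
    UNION ALL
    SELECT client, amount, 'COPY 2' FROM receipts WHERE id = ?
""", (1, 1, 1)).fetchall()

print(rows)  # three identical rows, distinguished only by copy_label
```

Note that UNION ALL (not UNION) is essential here: UNION would collapse the three identical copies into a single row.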

Friday, March 9, 2012

Reasonable alternative to Dynamic SQL?

I have a web form that contains 7 search boxes. Any 2 can be used to search,
but only 2 out of the 7.
Since I don't know which ones will be used for search criteria, I figure
using dynamic SQL will be the best solution, adding WHERE clauses as needed.
However, the query is somewhat intense, and I worry about execution time.
Is there an alternative to this that I don't know about?

SQL Server MVP Erland Sommarskog has written an excellent article on dynamic
searches, and one of his conclusions is that dynamic SQL can be the right
choice from a performance point of view in some circumstances. You can find
his article here:
http://www.algonet.se/~sommar/dyn-search.html
--
Jacco Schalkwijk MCDBA, MCSD, MCSE
Database Administrator
Eurostop Ltd.
"Learning SQL Server" <new@.bie.com> wrote in message
news:OHvm6aQVDHA.2200@.TK2MSFTNGP09.phx.gbl...
> I have a web form that contains 7 search boxes. Any 2 can be used to
search,
> but only 2 out of the 7.
> Since I dont know which ones will be used for search criteria, I figure
> using Dynamic SQL will be the best solution and add WHERE clauses as
needed.
> However, the query is somewhat intense, and I worry about execution time.
> Is there an alternative to this that I dont know about?
>
Jacco:
Excellent is an understatement. Thank you for this great resource.
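The static alternative Erland's article weighs against dynamic SQL is a single query in which every criterion is optional (a minimal sketch in Python's sqlite3; the orders table and its columns are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (city TEXT, status TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [("Oslo", "open", 10.0), ("Lima", "closed", 25.0)])

def search(city=None, status=None, min_total=None):
    # Each criterion collapses to TRUE when its parameter is NULL,
    # so any 2 of the boxes (or all, or none) can be filled in.
    sql = """SELECT city, status, total FROM orders
             WHERE (? IS NULL OR city = ?)
               AND (? IS NULL OR status = ?)
               AND (? IS NULL OR total >= ?)"""
    return conn.execute(sql, (city, city, status, status,
                              min_total, min_total)).fetchall()

print(search(city="Oslo"))                    # matches on city only
print(search(status="closed", min_total=20))  # matches on two criteria
```

The trade-off, as the article explains, is that one cached plan must serve every combination of criteria, which is exactly the situation where dynamic SQL can win on performance.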
"Jacco Schalkwijk" <NOSPAMjaccos@.eurostop.co.uk> wrote in message
news:O0PRU0QVDHA.2068@.TK2MSFTNGP11.phx.gbl...
> SQL Server MVP Erland Sommarskog has written an excellent article on
dynamic
> searches and one of his conclusions is that dynamic SQL can be the right
> choice from a performance poitn of view in some circumstances. you can
find
> his article here:
> http://www.algonet.se/~sommar/dyn-search.html
> --
> Jacco Schalkwijk MCDBA, MCSD, MCSE
> Database Administrator
> Eurostop Ltd.
>
> "Learning SQL Server" <new@.bie.com> wrote in message
> news:OHvm6aQVDHA.2200@.TK2MSFTNGP09.phx.gbl...
> >
> > I have a web form that contains 7 search boxes. Any 2 can be used to
> search,
> > but only 2 out of the 7.
> >
> > Since I dont know which ones will be used for search criteria, I figure
> > using Dynamic SQL will be the best solution and add WHERE clauses as
> needed.
> > However, the query is somewhat intense, and I worry about execution
time.
> >
> > Is there an alternative to this that I dont know about?
> >
> >
> >
>

Wednesday, March 7, 2012

Relational Tables

For starters, please forgive my newbieness, as SQL is not my strongest area in any sense.

I am trying to create a web application that consists of a databound drop-down list (DDL); based on the selection in the 1st DDL, a gridview should be generated and populated with the criteria I'm looking for.

My trouble starts with the database. I have 2 tables that I'm working with - users and messages.

The users table consists of 2 columns: userid and useruri. The messages table consists of 2 columns: userid and body.

The first drop-down list is populated from a database call of SELECT useruri, userid FROM users. The output of this DDL is simply each user's name. This works perfectly.

To build the gridview or output (because I'm not sure if gridview is the right word), the second table, messages, needs to be queried, but I can't figure out how to take the userid from the users table and match it to the userid of the messages table, based on the selection in the DDL of the useruri, to build the view.

Does that make sense? Any help would be greatly appreciated.

Thanks,

Casey

You could use SqlDataSource controls and do it something like this:
User:
<asp:DropDownList ID="ddlUsers" runat="server" AutoPostBack="True"
    DataSourceID="sdsUsers" DataTextField="Name" DataValueField="Id" />
<asp:SqlDataSource ID="sdsUsers" runat="server"
    ConnectionString="<%$ ConnectionStrings:ed_testConnectionString %>"
    SelectCommand="SELECT * FROM [User]">
</asp:SqlDataSource>

Message:
<asp:DropDownList ID="ddlMessages" runat="server" DataSourceID="sdsMessages"
    DataTextField="Message" DataValueField="msgId" />
<asp:SqlDataSource ID="sdsMessages" runat="server"
    ConnectionString="<%$ ConnectionStrings:ed_testConnectionString %>"
    SelectCommand="SELECT [msgId], [Message] FROM [Message] WHERE ([Id] = @Id)">
    <SelectParameters>
        <asp:ControlParameter ControlID="ddlUsers" Name="Id"
            PropertyName="SelectedValue" Type="Int32" />
    </SelectParameters>
</asp:SqlDataSource>
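Under the markup, the SelectCommand is just a parameterized filter (or join) on userid. A self-contained sketch of the same lookup (Python's sqlite3 here; the users/messages schema follows the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userid INTEGER, useruri TEXT)")
conn.execute("CREATE TABLE messages (userid INTEGER, body TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "casey"), (2, "alex")])
conn.executemany("INSERT INTO messages VALUES (?, ?)",
                 [(1, "hello"), (1, "world"), (2, "hi")])

# The drop-down's DataValueField hands you the selected userid;
# use it as the parameter when querying the messages table.
selected_userid = 1
rows = conn.execute(
    "SELECT m.body FROM messages m "
    "JOIN users u ON u.userid = m.userid "
    "WHERE u.userid = ?", (selected_userid,)).fetchall()
print([body for (body,) in rows])  # ['hello', 'world']
```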

Really urgent - Data field retrieve and store

Basically, I make use of a FormView in Visual Web Developer Express Edition to create and store information in the database.

What I want to do further is to capture the value of the primary key of the newly created record, which I set to auto-increment.

This value would be stored in a session variable, global so that any page is able to access it.

If you simply want to return the ID to use within the application, and you are using a stored procedure, do this.

Insert the record as normal in the table, then add a statement such as SELECT SCOPE_IDENTITY() AS TheNewId

If your insert fails, this line never executes. If the insert is OK, you get the new ID passed back.

In your code, perform an ExecuteScalar method call on the command, and cast the return (decimal as object) into an integer, and store it in session. If the returned value is null, the insert failed.

What I often create for projects is a global "ExecuteScalar" method that accepts the name of a stored procedure, and a string dictionary of name/value pairs.

In the method, create the SqlCommand object, call the SqlCommandBuilder.DeriveParameters static method to get the parameters, then iterate through the dictionary to set these by name.

Call the command's ExecuteScalar as mentioned and proceed as above.
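The insert-then-read-back flow described above can be sketched like this (Python's sqlite3, where cursor.lastrowid plays the role of SCOPE_IDENTITY(), and a plain dict stands in for the ASP.NET Session object):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE MyTable (id INTEGER PRIMARY KEY AUTOINCREMENT, MyData TEXT)")

session = {}  # stand-in for the ASP.NET Session object

def insert_record(value):
    cur = conn.execute("INSERT INTO MyTable (MyData) VALUES (?)", (value,))
    conn.commit()
    # lastrowid is the key just generated, like SELECT SCOPE_IDENTITY()
    return cur.lastrowid

session["NewId"] = insert_record("first row")
print(session["NewId"])  # 1 - available to any later request via the session
```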

Really appreciate your feedback... but it looks very technical to me... I am totally a beginner but have to complete this project within another week...

Can you demonstrate the code (sorry for the hassle)? I'm not sure how to achieve what you mentioned...

Example SP

CREATE PROCEDURE dbo.MyTable_Insert
(
    @Parameter1 varchar(100)
)
AS
SET NOCOUNT ON

INSERT INTO MyTable (MyData) VALUES (@Parameter1)

SELECT SCOPE_IDENTITY() AS TheNewId

Example Code

int InsertRecord(string insertValue)
{
    // assumes you have set cnn as a suitable, open connection
    SqlCommand cmd = new SqlCommand("MyTable_Insert", cnn);
    cmd.CommandType = System.Data.CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@Parameter1", insertValue);

    object retObject = cmd.ExecuteScalar();
    if (retObject != null)
        return Convert.ToInt32(retObject); // SCOPE_IDENTITY() returns a decimal
    else
        return 0; // no new record
}

Really need to solve this SQL problem to finish this web site.

Hi,
I created a dataSet in an ASP.NET page which:
1. Loads all fields from each database record.
2. Creates a new field using 2 of the existing fields: FullName = FirstName
+ ' ' + LastName
3. Loads only the record which has the same FullName as the variable Name
passed in the URL to this page.
My SQL is this:
SELECT *, FirstName + ' ' + LastName as FullName
FROM people
WHERE FirstName + ' ' + LastName = ?
and:
<Parameter Name="@FullName" Value='<%# IIf((Request.QueryString("Name") <>
Nothing), Request.QueryString("Name"), "") %>'
Type="WChar"/></Parameters></MM:DataSet>
Can you please tell me how to do this?
I know I already asked this, but until now I got no solution, and I need to
solve this to finish this web site and deliver it.
I hope I was able to explain my problem better.
Thank You Very Much,
Miguel
Why are you putting the Request.QueryString in a pre-render block? Should it
not be in regular code with the rest of it? I.e.
Value = Request.QueryString("Name")
OHM ( Terry Burns )
. . . One-Handed-Man . . .
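Once the Name value arrives as a parameter, the concatenated-name match itself is straightforward (a sketch in Python's sqlite3, whose || operator plays the role of T-SQL's + for string concatenation; the people table is from the question, the city column is invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (FirstName TEXT, LastName TEXT, city TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)",
                 [("Miguel", "Moura", "Lisbon"), ("Ana", "Silva", "Porto")])

# Build FullName on the fly and filter on it, exactly as in the question;
# the ? placeholder receives the Name value from the query string.
name_from_url = "Miguel Moura"
row = conn.execute(
    "SELECT *, FirstName || ' ' || LastName AS FullName "
    "FROM people WHERE FirstName || ' ' || LastName = ?",
    (name_from_url,)).fetchone()
print(row)  # ('Miguel', 'Moura', 'Lisbon', 'Miguel Moura')
```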
"Miguel Dias Moura" <web001@.27NOSPAMlamps.com> wrote in message
news:%23UCksgXVEHA.2592@.TK2MSFTNGP09.phx.gbl...
> Hi,
> i created a dataSet in an ASP.Net page which:
> 1. Loads all fields from each database record.
> 2. Creates a new field using 2 of the existing fields: FullName =
FirstName
> + ' ' + LastName
> 3. Loads only the record which has the same FullName as the variable Name
> passed in the URL to this page.
> My SQL is this:
> SELECT *, FirstName + ' ' + LastName as FullName
> FROM people
> WHERE FirstName + ' ' + LastName = ?
> and:
> <Parameter Name="@.FullName" Value='<%# IIf((Request.QueryString("Name")
<>
> Nothing), Request.QueryString("Name"), "") %>'
> Type="WChar"/></Parameters></MM:DataSet>
> Can you please tell me how to do this?
> I know i allready asked this but until now i got no solution and i need to
> solve this to finish this web site and deliver it.
> I hope i was able to explain better my problem.
> Thank You Very Much,
> Miguel
>
>
Do you know the strange thing? I tested it and everything works fine... but
when I was testing in the Dreamweaver window where I place the SQL, I was
not getting any records... I have no idea why this is... but well, at least it's
working.
Thanks for everybody's help and patience... my code is working :-)
Thanks again for your time,
Miguel
"One Handed Man ( OHM - Terry Burns )" <news.microsoft.com> wrote in message
news:OtAHCsXVEHA.3596@.tk2msftngp13.phx.gbl...
> Why are you putting the Request.Querystring in a pre-render block, should
it[vbcol=seagreen]
> not be in regular code with the rest of it ? IE
> Value = Request.QueryString("Name")
> --
> OHM ( Terry Burns )
> . . . One-Handed-Man . . .
>
> "Miguel Dias Moura" <web001@.27NOSPAMlamps.com> wrote in message
> news:%23UCksgXVEHA.2592@.TK2MSFTNGP09.phx.gbl...
> FirstName
Name[vbcol=seagreen]
IIf((Request.QueryString("Name")[vbcol=seagreen]
> <>
to
>


really large stored procedures

I'm not much of a SQL developer; most of my exposure to it comes from
needing to get data for my web applications.
I've been kind of wondering: is it abnormal to have stored procs that are
over 500 lines long?
The business requirements seem to warrant such a long procedure and it
doesn't seem to have any absurd inefficiencies, but at the same time, is
it very unusual?
In the object-oriented world, this would be considered horrible style,
yet I can't really find any guidelines on the topic, so I thought I'd
ask.
TIA

Not necessarily - in SQL, typically it's the size of the statements, e.g.
select name from sysobjects where xtype='U'
and
select
name
from
sysobjects
where
xtype = 'U'
are the same statement, but one is 1 line and the other is 6 lines.
however, many statements often indicates that you've got a lot of
procedural code and not much relational (set-based) code.
in the oo world, you have no choice - e.g., if you want to update all
rows in a resultset to have the same StatusCode, you have to loop
through it and set it.
but in sql, you wouldn't need a cursor to do this -- you would just need
a single update statement, e.g.
update mytable
set status = 'P'
where <conditions>
so, the [well, my] answer is -- line count is not intrinsically
indicative of a bad proc. bad code is.
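The set-based point is easiest to see in a runnable form: one UPDATE does what the OO-style loop would (Python's sqlite3 with a made-up mytable):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(1, "N"), (2, "N"), (3, "P")])

# Set-based: every qualifying row is updated in a single statement,
# with no cursor and no per-row round trip.
conn.execute("UPDATE mytable SET status = 'P' WHERE status = 'N'")

updated = conn.execute(
    "SELECT COUNT(*) FROM mytable WHERE status = 'P'").fetchone()[0]
print(updated)  # 3
```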
T-SQL is not object oriented, so 500 lines or more is very common. When I write a
SQL statement, I use a style that breaks each selected column onto a
separate line (and I typically select a lot of columns when developing
reports or fact tables for OLAP cubes), so my procedures and views tend to
be 100s or even 1000s of lines because of this.
No. 500 lines is fine as long as you are not experiencing Recompiles or
allowing extended locks on tables during production hours. If you see
recompiles (SQL 2000), break out the code causing the recompile or
eliminate it if possible. SQL 2005 implements statement recompilation,
which eliminates most of the recompile overhead. In any case, recompiles
hurt performance, so eliminate them if possible. You can check for them
using Profiler and the SP:Recompile event (SQL 2000) and the
SQL:StmtRecompile for SQL 2005.
David Gugick
Quest Software
www.quest.com
A database is not an object-oriented world.
A 500-line stored procedure is not a small baby, but we have quite a
few of that size in our database. I think our longest is around 3000
lines.
Partly this is due to the fact that T-SQL is not well suited to breaking
things into pieces. You can write procedures and pass parameters, but
these are scalar, and in SQL you rather work with tables. You can share
tables between stored procedures, but it is a bit kludgy.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Coming from an OO background, I also wondered about the efficiencies of
large stored procs when transitioning to SQL Server development (like you,
I had some exposure to it, but from a front-end / middle-tier
perspective only).
I think that if you want to evaluate your stored procs and maybe
consider rewriting them, you should definitely read up on set-based vs.
procedural. There's tons of info in this newsgroup for you to lap up in
any case.
regards,
Gerard

From my own point of view, it is very common to have this kind of SP in a
production environment.
"Gerard" wrote:

> Coming from an OO background I also wondered about the efficiencies of
> large stored procs when transitioning to SQL Server development (as you
> I had some exposure to it but from a front-end / middle-tier
> perspective only).
> I think that if you want to evaluate your stored proc and maybe
> consider rewriting them you should definitly read up on set-based vs.
> procedural. There's tons of info in this newsgroup for you to lap up in
> any case.
> regards,
> Gerard
Not at all abnormal. Bear in mind that in SQL, unlike in other
languages, there can be two reasons for a stored procedure being long:
a) Large number of statements
b) Large number of lines - long statements running over multiple lines.
Long statements are more common in SQL than in other languages due to
the sheer complexity of single SELECT statements, which may include a
long list of columns, references to multiple tables joined together,
and complex WHERE clauses. A long statement doesn't necessarily mean a
long processing time, as long as joins are well-designed and tables are
indexed. Just switched windows to a stored procedure containing
multiple 80-line SQL statements - processing time is under a second.
The things to watch for, for efficiency's sake, are the things that SQL
Server is slow at: I've found that DELETE operations, and looping
through cursors are the worst for this.
Seb

Thanks guys,
I definitely use one line for every "word" and don't use cursors unless
absolutely necessary.
The general format I use is:
select
blah1,
blah2,
blah3
from
tablesmth
inner join
someothertable
on
somefield
=someotherfield
where
stuff
=stuff2
and
stuff3
=stuff4

Line breaks are a renewable resource, so use them wherever you feel they
are useful.

really isolated

Hi,
I read on microsoft.com
that "web servers do not require netbios or smb" (sic).
Then MS recommends disabling SMB this way:
1) "Clear the Client Microsoft Networks box"
2) "Clear the File and Printer Sharing for Microsoft Networks box"
This way, the web server stays really isolated from the intranet.
Is this a good practice?
Because, this way,
I can't move development files to the web server
and can't do replication.
Thanks
(this is the link
http://msdn.microsoft.com/library/d...
d89.asp)
atte,
Hernán Castelo
UTN Buenos Aires

The article discusses:
"Disable all unnecessary protocols, including NetBIOS and SMB. Web servers
do not require NetBIOS or SMB on their Internet-facing network interface
cards (NICs). Disable these protocols to counter the threat of host
enumeration."
So, this is assuming that the web server has multiple NICs installed, and
that one is internet-facing. I think what's more important is to limit
the number of ports open on the box to 80 and/or 443.
Thanks,
Kevin McDonnell
Microsoft Corporation
This posting is provided AS IS with no warranties, and confers no rights.

Hmm... I see.
OK, thanks for the reply.
atte,
Hernán Castelo
UTN Buenos Aires
"Kevin McDonnell [MSFT]" <kevmc@.online.microsoft.com> escribi en el mensaje news:rHLgU
nZHEHA.660@.cpmsftngxa06.phx.gbl...
> The article dicusses;
> "Disable all unnecessary protocols, including NetBIOS and SMB. Web servers
> do not require NetBIOS or SMB on their Internet-facing network interface
> cards (NICs). Disable these protocols to counter the threat of host
> enumeration."
> So, this is assuming that the web server has multiple nics installed, and
> that one is internet facing. I think what's more important is to limit
> the number of ports open on the box to 80 and/or 443.
>
> Thanks,
> Kevin McDonnell
> Microsoft Corporation
> This posting is provided AS IS with no warranties, and confers no rights.
>
>

Monday, February 20, 2012

Real simple licensing question for SQL Server 2005.

Hello, I host web pages on server(s) that I own, located in a datacenter.

I am trying to figure out how to properly license SQL Server 2005.

I have called Microsoft, and the people I talk to don't seem to understand my situation. They keep asking me about the number of employees I have, which is totally irrelevant.

Let me give you a basic example.

Example 1.

My personal web page has a blog on it. The blog data is stored in a SQL database. It's a popular blog; thousands of anonymous people read my blog every month via my webpage. The only "thing" accessing the SQL database is an ASP.NET script I write, which then turns around and presents the data via HTML over the web, just like every other blog in the universe.

Example 2.

I sell artwork over the internet via my web page. My web page uses a shopping cart system which makes use of an SQL database to keep track of inventory and orders etc. No one ever tries to connect to my database, they just use the shopping cart on my web page which of course connects to the database.

Now, according to the MS licensing documentation, it seems that I can buy SQL Server with 5 device CALs. Since I only have 1 or 2 webservers accessing the database server, that should be no problem, right?

So my basic question is this: is a single device CAL enough to allow one webserver to connect to one SQL server and then show dynamic content to thousands of anonymous users (whoever happens to visit my sites)?

Thank you very much for any help you can provide.

viscious:

I have called Microsoft, and the people I talk to don't seem to understand my situation.


You really need to try again to work with Microsoft on licensing, as this is a legal issue and they should be the best source of information on the licensing of their product.

ready-made sample script to display a search result from a database

Hi all,

Can anyone give me a ready-made sample script to display a search result from a database?

My intention is to give a dynamic web page for the search in the BookStore website.

The result will show the following:-

First the picture of the book.

To the right of it, the 'Title' field; below that, the 'Description' field.

It should be created programmatically, as the contents are in the database.

I tried several ways but could not succeed. I have the BookID, Title, BookPicture, and Description fields.

Thanking you in advance,

Tomy

Hi Tomy,

The Microsoft PetShop 4.0 for .NET framework 2.0 will be a good place to start.

You can download from the following link:

http://msdn2.microsoft.com/en-us/library/aa479071.aspx

If anything is unclear, please feel free to let me know.
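Short of the full PetShop sample, the search query itself can be as small as a LIKE over Title and Description (a hypothetical sketch in Python's sqlite3, using the four fields from the question; in the ASP.NET page the same query would feed a Repeater or DataList that renders the picture on the left and the title/description on the right):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Books
                (BookID INTEGER, Title TEXT, BookPicture TEXT, Description TEXT)""")
conn.executemany("INSERT INTO Books VALUES (?, ?, ?, ?)",
                 [(1, "SQL Basics", "sql.png", "An intro to queries"),
                  (2, "Cooking 101", "cook.png", "Recipes for beginners")])

def search_books(term):
    # Match the search term anywhere in the title or description.
    pattern = f"%{term}%"
    return conn.execute(
        "SELECT BookID, Title, BookPicture, Description FROM Books "
        "WHERE Title LIKE ? OR Description LIKE ?", (pattern, pattern)).fetchall()

for book_id, title, picture, desc in search_books("SQL"):
    print(title, "-", desc)  # render picture left, title/description right
```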

Reads/Duration Higher on ADO vs. QA

Hello.
I have an odd scenario. I just swapped out database servers today for a
heavily-hit web server. Everything went smoothly for the most part, except
I'm having sporadic queries that are taking an excessive amount of time to
execute. Excessive, as in, before the swap, these queries would take < 2
seconds, and now they are timing out (> 30 seconds). Also, this is only
happening for a very small percentage of cases.
To make things more confusing, I had it happening to me consistently for
about 20 minutes, but when I ran the exact same stored proc in Query
Analyzer, it would run just fine.
So I try to trace it in Profiler. The results were very odd. As expected,
when hit from the website (ADO), my query was timing out (Duration ~30000),
while the trace results from QA were 1-2 seconds. But my reads from ADO
were HUGE -- like in the millions -- whereas the reads from QA were only in
the thousands.
What in the world would cause my reads to get so high, just by hitting it
from ADO? As I said, it's the exact same stored procedure and parameters (I
actually copied the TextData from Profiler and pasted it into QA).
I'm assuming that the execution plan between the two is different, but I
have no idea how to capture the execution plan from my particular
problem-causing query from the web (without getting flooded w/ all the other
queries running). I was able to capture my QA execution plan in Profile by
filtering on SPID, but my SPID from ADO changes constantly.
Oh, by the way, after beating my head against the wall trying to figure this
out, my query from the web started behaving normally, so my troubleshooting
had to come to an end.
I'm very confused. Please, any suggestions would be appreciated.
Thanks.
Jerad
Jerad
Capture an execution plan from the Profiler and compare it to the execution
plan that was generated by QA.
Can you identify a stored procedure in the Profiler that caused the
problem?
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> <original post snipped>
|||Hi
You may want to look at SP:recompile events.
You may want to try scripting the trace output as a SQL script and running
that.
It could be that you are passing atypical parameter values, so read up on
parameter sniffing such as in Ken Henderson's "The Guru's Guide to SQL
Server Architecture and Internals" ISBN 0-201-70047-6
John
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> <original post snipped>
|||"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> <original post snipped>
I agree that the problem is that your executions are getting a different
plan in QA and ADO.
Typically stored procedure invocations will use a saved query plan, but SQL
Client connections have a number of settings which can affect the results of
queries, and so query plans cannot be shared between clients with different
settings. This might account for why you got a different plan in QA than
from ADO.
First, how do you reproduce the bad behavior in QA? Open Profiler and you
will see the connection settings for the existing ADO connections in the
Existing Connection event. Copy those out into QA and run them before you
run the query again. This should give you the best shot at reproducing the
query exactly.
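One session setting worth checking in particular (a common SQL Server 2000 explanation for this symptom, not something confirmed in the thread): Query Analyzer turns ARITHABORT ON by default, while OLE DB/ADO connections typically leave it OFF, and because plans are cached per combination of SET options, the same procedure can end up with two different cached plans. A sketch of replaying typical ADO defaults in QA (`dbo.MyProblemProc` and its parameter are placeholders):

```sql
-- Replay typical ADO/OLE DB session settings in Query Analyzer before
-- re-running the procedure, so you hit the same plan-cache entry.
-- Verify these against the Existing Connection event in Profiler.
SET ARITHABORT OFF          -- QA defaults this ON; ADO usually does not
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON

EXEC dbo.MyProblemProc @Param1 = 123   -- placeholder proc and parameters
```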
Alternatively, you can use ADO. Just paste this into a file on the server
with a .VBS extension
RunCommand.vbs listing
Sub RunCommand(sql)
    Dim con
    Set con = CreateObject("ADODB.Connection")
    con.Open "Provider=sqloledb;" & _
             "Data Source=(local);" & _
             "Initial Catalog=master;" & _
             "User ID=stan;Password=lee"
    Dim rst
    Set rst = con.Execute(sql)
    Dim rc
    rc = 0
    Do While Not rst.EOF
        rc = rc + 1
        rst.MoveNext
    Loop
    MsgBox "got " & rc & " rows"
End Sub

Dim sql
sql = InputBox("Enter SQL")
RunCommand sql
General fixes include:
Update statistics. Out of date statistics are the number one cause of poor
query performance. Having "swapped out database servers" is a big red flag
here.
Suppress parameter sniffing in your stored procedure by not binding query
where-clause parameters directly to input parameters. Instead, bind the
query to a local variable set to the value of the input parameter.
Run the stored procedure WITH RECOMPILE; you will get better query plans,
at the cost of recompiling the procedure on every invocation.
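A minimal T-SQL sketch of the fixes above, using a hypothetical table and procedure (`dbo.Orders` and `dbo.GetOrdersByCustomer` are illustrations, not names from the thread):

```sql
-- 1) Refresh statistics after the server swap:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN

-- 2) and 3) Suppress parameter sniffing with a local variable, and/or
--    force a fresh plan on every call with WITH RECOMPILE:
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
WITH RECOMPILE
AS
BEGIN
    -- Copying the parameter into a local variable hides its value from
    -- the optimizer, which then uses average-density estimates instead
    -- of a plan tailored to one sniffed value.
    DECLARE @LocalCustomerID int
    SET @LocalCustomerID = @CustomerID

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID
END
```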
David
|||Are you using a client side or server side cursor from ADO? This can have a
huge impact on the number of reads and the duration, even where the same
execution plan is used as the Query Analyser. Why? Extra reads can come from
the cursoring operations & extra duration from the overall time lapsed to
cursor over the resultset.
If you want to confirm the runtime execution plan, you can inject a SQL
Trace script into the actual command to log execution plans for the current
SPID. This isn't code you'd want to leave permanently in production, just
something you'd add temporarily to get the precise plans during run-time. To
do this, you'd create the trace in profiler & use the File / Script Trace
menu to generate the script. You'd have to manually add the SPID filter via
a parameter you'd pick up at runtime. If your SQL's embedded into an ASP
page or Win32 app, you might extract it to a stored proc first so that you
can add the trace code more easily. Sure, this is jumping through a lot of
hoops (it regularly annoys me that this is so hard to do) but at least there
is a way to get the job done.
Regards,
Greg Linwood
SQL Server MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> <original post snipped>
|||How can you tell if it's a client or server side cursor? What's the
difference between the two?
"Greg Linwood" <g_linwoodQhotmail.com> wrote in message
news:%23BfqHwZhFHA.3316@.TK2MSFTNGP14.phx.gbl...
> <previous reply snipped>
|||A client cursor means that the client application downloads the entire
resultset before moving (e.g. rs.MoveNext or dr.Read) over the rows. With a
server-side cursor, the client application brings down rows one at a time
during rs.MoveNext or dr.Read operations.
You can tell the difference by checking ADO's CursorLocation property on
the recordset or command objects.
btw, queries executed over a client-side cursor can still have much longer
duration than the same queries executed directly on the server via Query
Analyser, because the resultset needs to be sent back over the network to the
client application. If the application stalls the process of reading the
resultset for any reason (the client PC might be busy running multiple
applications, for example), this is reflected in the duration value.
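To make that concrete, here is a small VBScript/ADO sketch of where the cursor location is chosen (the connection details are assumptions; the constants are ADO's standard CursorLocationEnum values):

```vbscript
' adUseServer (2) fetches rows from the server as you MoveNext;
' adUseClient (3) pulls the whole resultset to the client first.
Const adUseServer = 2
Const adUseClient = 3

Dim con, rst
Set con = CreateObject("ADODB.Connection")
con.Open "Provider=sqloledb;Data Source=(local);" & _
         "Initial Catalog=master;Integrated Security=SSPI"

Set rst = CreateObject("ADODB.Recordset")
rst.CursorLocation = adUseClient      ' must be set BEFORE opening
rst.Open "SELECT name FROM sysobjects", con

' You can inspect an existing recordset the same way:
If rst.CursorLocation = adUseClient Then
    WScript.Echo "client-side cursor"
End If
```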
HTH
Regards,
Greg Linwood
SQL Server MVP
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:%23eRt9AdhFHA.1252@.TK2MSFTNGP09.phx.gbl...
> <previous replies snipped>
|||Thanks so much for all the great replies.
It's late, so I have to make this quick.
- I was able to reproduce the slow execution through QA, so I don't think
it's necessarily a QA vs. ADO thing. Sometimes it runs OK in ADO and in QA;
sometimes it's slow in both (or one or the other). I think this may also
rule out client-side vs. server-side cursors.
- I did run sp_updatestats, but that didn't seem to help much.
- I tried WITH RECOMPILE, but that didn't seem to help much.
- I managed to capture the execution plan for when it runs fast (< 2
seconds) vs. when it times out (> 30 seconds).
Here are the links to those (top lists stats):
http://www.animalcrossingcommunity.com/slowplan.txt
http://www.animalcrossingcommunity.com/fastplan.txt
Here is the actual stored proc that is run (this is actually a snippet --
some stuff runs afterwards, but I'm 99% sure this part is what is causing the
problems):
http://www.animalcrossingcommunity.com/query.txt
Thanks again for all your help.
Jerad
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> <original post snipped>
|||"Jerad Rose" <no@.spam.com> wrote in message
news:%23l200yehFHA.328@.tk2msftngp13.phx.gbl...
> <previous reply snipped>
Good. Now post the table DDL with indexes and an estimate of the number
of rows in each table.
This is a hard one, and the best plan in cases like this is often to look
beyond these two plans and see if changes to the indexing structure or query
formulation can result in a much cheaper (and therefore more stable) plan.
David
|||Ok, here you go:
http://www.animalcrossingcommunity.com/ddl.txt
Here are some more clues. Before I swapped servers (basically the same
specs, but the new one has a RAID 1 configuration and the old one had just a
single drive), this particular query gave me virtually no problems. It
typically ran in under a second, averaging about 0.5 seconds. Now, even at
best, I'm getting 1-2 seconds or more -- at worst, 30+ seconds.
Thanks again for taking time to help. If there's any other info I can
provide to help diagnosis, let me know.
Jerad
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:urYuhAjhFHA.1048@.tk2msftngp13.phx.gbl...
> <previous replies snipped>

Reads/Duration Higher on ADO vs. QA

"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:%23eRt9AdhFHA.1252@.TK2MSFTNGP09.phx.gbl...
> How can you tell if its a client or server side cursor ? Whats the
> difference between the two ?
> "Greg Linwood" <g_linwoodQhotmail.com> wrote in message
> news:%23BfqHwZhFHA.3316@.TK2MSFTNGP14.phx.gbl...
> a
> from
> current
> To
> via
> there
> of
> this
> But
> it
> parameters
>|||Thanks so much for all the great replies.
It's late, so I have to make this quick.
. I was able to reproduce the slow execution through QA, so I don't think
it's necessarily a QA vs. ADO thing. Sometimes it runs ok in ADO and in QA.
Sometimes it's slow in both (or one or the other). I think this may also
rule out client-side vs. server-side cursors.
. I did run sp_updatestats, but that didn't seem to help much
. I tried WITH RECOMPILE, but didn't seem to help much
. I managed to capture the execution plan for when it runs fast (< 2
seconds) vs. when it times out (> 30 seconds).
Here are the links to those (top lists stats):
http://www.animalcrossingcommunity.com/slowplan.txt
http://www.animalcrossingcommunity.com/fastplan.txt
Here is the actual stored proc that is run (this is actually a snippet --
some stuff runs afterwards, but I'm 99% this part is what is causing the
problems):
http://www.animalcrossingcommunity.com/query.txt
Thanks again for all your help.
Jerad
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
> Hello.
> I have an odd scenario. I just swapped out database servers today for a
> heavily-hit web server. Everything went smoothly for the most part,
> except I'm having sporadic queries that are taking an excessive amount of
> time to execute. Excessive, as in, before the swap, these queries would
> take < 2 seconds, and now they are timing out (> 30 seconds). Also, this
> is only happening for a very small percentage of cases.
> To make things more confusing, I had it happening to me consistently for
> about 20 minutes, but when I ran the exact same stored proc in Query
> Analyzer, it would run just fine.
> So I try to trace it in Profiler. The results were very odd. As
> expected, when hit from the website (ADO), my query was timing out
> (Duration ~30000), while the trace results from QA were 1-2 seconds. But
> my reads from ADO were HUGE -- like in the millions -- whereas the reads
> from QA were only in the thousdands.
> What in the world would cause my reads to get so high, just by hitting it
> from ADO? As I said, it's the exact same stored procedure and parameters
> (I actually copied the TextData from Profiler and pasted it into QA).
> I'm assuming that the execution plan between the two is different, but I
> have no idea how to capture the execution plan from my particular
> problem-causing query from the web (without getting flooded w/ all the
> other queries running). I was able to capture my QA execution plan in
> Profile by filtering on SPID, but my SPID from ADO changes constantly.
> Oh, by the way, after beating my head against the wall trying to figure
> this out, my query from the web started behaving normally, so my
> troubleshooting had to come to an end.
> I'm very confused. Please, any suggestions would be appreciated.
> Thanks.
> Jerad
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:%23l200yehFHA.328@.tk2msftngp13.phx.gbl...
> Thanks so much for all the great replies.
> It's late, so I have to make this quick.
> . I was able to reproduce the slow execution through QA, so I don't think
> it's necessarily a QA vs. ADO thing. Sometimes it runs ok in ADO and in
> QA. Sometimes it's slow in both (or one or the other). I think this may
> also rule out client-side vs. server-side cursors.
> . I did run sp_updatestats, but that didn't seem to help much
> . I tried WITH RECOMPILE, but didn't seem to help much
> . I managed to capture the execution plan for when it runs fast (< 2
> seconds) vs. when it times out (> 30 seconds).
> Here are the links to those (top lists stats):
> http://www.animalcrossingcommunity.com/slowplan.txt
> http://www.animalcrossingcommunity.com/fastplan.txt
> Here is the actual stored proc that is run (this is actually a snippet --
> some stuff runs afterwards, but I'm 99% this part is what is causing the
> problems):
> http://www.animalcrossingcommunity.com/query.txt
> Thanks again for all your help.
>
Good. Now post the table DDL with indexes and a an estimate of the number
of rows in each table.
This is a hard one, and the best plan in cases like this is often to look
beyond these two plans and see if changes to the indexing structure or query
formulation can result in a much cheaper (and therefore more stable) plan.
David
David|||Ok, here you go:
http://www.animalcrossingcommunity.com/ddl.txt
Here are some more clues. Before I swapped servers (basically same specs,
but new has Raid1 configuration and old just had one HD), this particular
query gave me virtually no problems. It typically ran in under a second,
seemingly averaging about 0.5 seconds. Now, even at best, I'm getting 1-2
or more seconds -- at worst, 30+ seconds.
Thanks again for taking time to help. If there's any other info I can
provide to help diagnosis, let me know.
Jerad
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:urYuhAjhFHA.1048@.tk2msftngp13.phx.gbl...
> "Jerad Rose" <no@.spam.com> wrote in message
> news:%23l200yehFHA.328@.tk2msftngp13.phx.gbl...
> Good. Now post the table DDL with indexes and a an estimate of the number
> of rows in each table.
> This is a hard one, and the best plan in cases like this is often to look
> beyond these two plans and see if changes to the indexing structure or
> query formulation can result in a much cheaper (and therefore more stable)
> plan.
> David
> David
>

Reads/Duration Higher on ADO vs. QA

Hello.
I have an odd scenario. I just swapped out database servers today for a
heavily-hit web server. Everything went smoothly for the most part, except
I'm having sporadic queries that are taking an excessive amount of time to
execute. Excessive, as in, before the swap, these queries would take < 2
seconds, and now they are timing out (> 30 seconds). Also, this is only
happening for a very small percentage of cases.
To make things more confusing, I had it happening to me consistently for
about 20 minutes, but when I ran the exact same stored proc in Query
Analyzer, it would run just fine.
So I try to trace it in Profiler. The results were very odd. As expected,
when hit from the website (ADO), my query was timing out (Duration ~30000),
while the trace results from QA were 1-2 seconds. But my reads from ADO
were HUGE -- like in the millions -- whereas the reads from QA were only in
the thousands.
What in the world would cause my reads to get so high, just by hitting it
from ADO? As I said, it's the exact same stored procedure and parameters (I
actually copied the TextData from Profiler and pasted it into QA).
I'm assuming that the execution plan between the two is different, but I
have no idea how to capture the execution plan from my particular
problem-causing query from the web (without getting flooded w/ all the other
queries running). I was able to capture my QA execution plan in Profiler by
filtering on SPID, but my SPID from ADO changes constantly.
Oh, by the way, after beating my head against the wall trying to figure this
out, my query from the web started behaving normally, so my troubleshooting
had to come to an end.
I'm very confused. Please, any suggestions would be appreciated.
Thanks.
Jerad|||Jerad
Capture an execution plan from Profiler and compare it to the execution
plan that was generated by QA.
Can you identify the stored procedure in Profiler that caused the
problem?
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
>|||Hi
You may want to look at SP:recompile events.
You may want to try scripting the trace output as a SQL script and running
that.
It could be that you are passing atypical parameter values, so read up on
parameter sniffing such as in Ken Henderson's "The Guru's Guide to SQL
Server Architecture and Internals" ISBN 0-201-70047-6
John
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
I agree that the problem is that your executions are getting a different
plan in QA and ADO.
Typically stored procedure invocations will use a saved query plan, but SQL
Client connections have a number of settings which can affect the results of
queries, and so query plans cannot be shared between clients with different
settings. This might account for why you got a different plan in QA than
from ADO.
First, how do you reproduce the bad behavior in QA? Open up Profiler and you
will see the connection settings for the existing ADO connections in the
ExistingConnection event. Copy those out into QA and run them before you
run the query again. This should give you the best shot at reproducing the
query exactly.
Alternatively, you can use ADO. Just paste this into a file on the server
with a .VBS extension
RunCommand.vbs listing
--
sub RunCommand(sql)
    dim con
    set con = CreateObject("ADODB.Connection")
    con.Open "Provider=sqloledb;" & _
             "Data Source=(local);" & _
             "Initial Catalog=master;" & _
             "User ID=stan;Password=lee"
    dim rst
    set rst = con.Execute(sql)
    dim rc
    rc = 0
    Do While not rst.eof
        rc = rc + 1
        rst.MoveNext
    Loop
    msgbox "got " & rc & " rows"
end sub
Dim sql
sql = InputBox("Enter SQL")
RunCommand(sql)
--
General fixes include:
Update statistics. Out of date statistics are the number one cause of poor
query performance. Having "swapped out database servers" is a big red flag
here.
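For example (the table name is a placeholder; substitute the real tables from
ddl.txt):

```sql
-- Refresh statistics on every user table in the current database:
EXEC sp_updatestats

-- or target one table with a full scan for better-quality statistics
-- (dbo.Posts is a hypothetical name):
UPDATE STATISTICS dbo.Posts WITH FULLSCAN
```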
Suppress parameter sniffing in your stored procedure by not binding query
where-clause parameters directly to input parameters. Instead, bind the
query to a local variable set to the value of the input parameter.
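A sketch of that local-variable trick (the procedure, table, and column names
are hypothetical):

```sql
-- The optimizer cannot sniff @LocalId at compile time, so it builds a
-- generic plan instead of one tuned to the first caller's parameter value.
CREATE PROCEDURE dbo.GetThreadPosts
    @ThreadId int
AS
BEGIN
    DECLARE @LocalId int
    SET @LocalId = @ThreadId

    SELECT PostId, Subject, PostedOn
    FROM dbo.Posts             -- placeholder table
    WHERE ThreadId = @LocalId  -- bound to the local, not the input parameter
END
```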
Run the stored procedure WITH RECOMPILE; you will get better query plans,
at the cost of recompiling the procedure on every invocation.
David|||Are you using a client-side or server-side cursor from ADO? This can have a
huge impact on the number of reads and the duration, even where the same
execution plan is used as in Query Analyser. Why? Extra reads can come from
the cursoring operations & extra duration from the overall time lapsed to
cursor over the resultset.
If you want to confirm the runtime execution plan, you can inject a SQL
Trace script into the actual command to log execution plans for the current
SPID. This isn't code you'd want to leave permanently in production, just
something you'd add temporarily to get the precise plans during run-time. To
do this, you'd create the trace in profiler & use the File / Script Trace
menu to generate the script. You'd have to manually add the SPID filter via
a parameter you'd pick up at runtime.. If your SQL's embedded into an ASP
page or Win32 app, you might extract it to a stored proc first so that you
can add the trace codemore easily. Sure, this is jumping through a lot of
hoops (it regularly annoys me that this is so hard to do) but at least there
is a way to get the job done.
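A sketch of what such an injected trace script might look like; the file
path, and the use of event 68 (Execution Plan) with columns 1 (TextData) and
12 (SPID), are assumptions to check against the script Profiler generates
for you:

```sql
DECLARE @TraceID int, @maxsize bigint, @on bit, @spid int
SET @maxsize = 5          -- MB
SET @on = 1
SET @spid = @@SPID        -- the SPID picked up at runtime

-- Create the trace (SQL Server appends .trc to the file name)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\plans', @maxsize

-- Event 68 = Execution Plan; column 1 = TextData, column 12 = SPID
EXEC sp_trace_setevent @TraceID, 68, 1, @on
EXEC sp_trace_setevent @TraceID, 68, 12, @on

-- Filter to the current SPID so other queries don't flood the trace
EXEC sp_trace_setfilter @TraceID, 12, 0, 0, @spid

EXEC sp_trace_setstatus @TraceID, 1   -- start the trace
```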
Regards,
Greg Linwood
SQL Server MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
>|||How can you tell if it's a client-side or server-side cursor? What's the
difference between the two?
"Greg Linwood" <g_linwoodQhotmail.com> wrote in message
news:%23BfqHwZhFHA.3316@.TK2MSFTNGP14.phx.gbl...
>|||A client cursor means that the client application downloads the entire
resultset before moving (e.g. rs.MoveNext or dr.Read) over the rows. With a
server-side cursor, the client application brings down rows one at a time
during rs.MoveNext or dr.Read operations.
You can tell the difference by checking ADO's CursorLocation property on
the connection or recordset objects.
By the way, queries executed over a client-side cursor can still have a much
longer duration than the same queries executed directly on the server via
Query Analyser, because the resultset needs to be sent back over the network
to the client application. If the application stalls the process of reading
the resultset for any reason (the client PC might be busy running multiple
applications, for example), this reflects in the duration value.
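A minimal ADO check, in the style of the RunCommand.vbs listing earlier in
the thread (the connection string is a placeholder):

```vbscript
' Where CursorLocation lives in ADO.
' adUseClient = 3, adUseServer = 2 (the default).
dim con, rst
set con = CreateObject("ADODB.Connection")
con.CursorLocation = 3   ' 3 = adUseClient: fetch the whole resultset up front
con.Open "Provider=sqloledb;Data Source=(local);" & _
         "Initial Catalog=master;Integrated Security=SSPI"
set rst = con.Execute("select name from sysobjects")
msgbox rst.CursorLocation ' recordsets inherit the connection's setting
```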
HTH
Regards,
Greg Linwood
SQL Server MVP
"Hassan" <fatima_ja@.hotmail.com> wrote in message
news:%23eRt9AdhFHA.1252@.TK2MSFTNGP09.phx.gbl...
>|||Thanks so much for all the great replies.
It's late, so I have to make this quick.
- I was able to reproduce the slow execution through QA, so I don't think
it's necessarily a QA vs. ADO thing. Sometimes it runs ok in ADO and in QA.
Sometimes it's slow in both (or one or the other). I think this may also
rule out client-side vs. server-side cursors.
- I did run sp_updatestats, but that didn't seem to help much
- I tried WITH RECOMPILE, but that didn't seem to help much
- I managed to capture the execution plan for when it runs fast (< 2
seconds) vs. when it times out (> 30 seconds).
Here are the links to those (the top of each file lists stats):
http://www.animalcrossingcommunity.com/slowplan.txt
http://www.animalcrossingcommunity.com/fastplan.txt
Here is the actual stored proc that is run (this is actually a snippet --
some stuff runs afterwards, but I'm 99% sure this part is what is causing the
problems):
http://www.animalcrossingcommunity.com/query.txt
Thanks again for all your help.
Jerad
"Jerad Rose" <no@.spam.com> wrote in message
news:e2KdzyQhFHA.2644@.TK2MSFTNGP09.phx.gbl...
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:%23l200yehFHA.328@.tk2msftngp13.phx.gbl...
> Thanks so much for all the great replies.
> It's late, so I have to make this quick.
> . I was able to reproduce the slow execution through QA, so I don't think
> it's necessarily a QA vs. ADO thing. Sometimes it runs ok in ADO and in
> QA. Sometimes it's slow in both (or one or the other). I think this may
> also rule out client-side vs. server-side cursors.
> . I did run sp_updatestats, but that didn't seem to help much
> . I tried WITH RECOMPILE, but didn't seem to help much
> . I managed to capture the execution plan for when it runs fast (< 2
> seconds) vs. when it times out (> 30 seconds).
> Here are the links to those (top lists stats):
> http://www.animalcrossingcommunity.com/slowplan.txt
> http://www.animalcrossingcommunity.com/fastplan.txt
> Here is the actual stored proc that is run (this is actually a snippet --
> some stuff runs afterwards, but I'm 99% sure this part is what is causing the
> problems):
> http://www.animalcrossingcommunity.com/query.txt
> Thanks again for all your help.
>
Good. Now post the table DDL with indexes and an estimate of the number
of rows in each table.
This is a hard one, and the best plan in cases like this is often to look
beyond these two plans and see if changes to the indexing structure or query
formulation can result in a much cheaper (and therefore more stable) plan.
David|||Ok, here you go:
http://www.animalcrossingcommunity.com/ddl.txt
Here are some more clues. Before I swapped servers (basically same specs,
but new has Raid1 configuration and old just had one HD), this particular
query gave me virtually no problems. It typically ran in under a second,
seemingly averaging about 0.5 seconds. Now, even at best, I'm getting 1-2
or more seconds -- at worst, 30+ seconds.
Thanks again for taking time to help. If there's any other info I can
provide to help diagnosis, let me know.
Jerad
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:urYuhAjhFHA.1048@.tk2msftngp13.phx.gbl...
> "Jerad Rose" <no@.spam.com> wrote in message
> news:%23l200yehFHA.328@.tk2msftngp13.phx.gbl...
>> Thanks so much for all the great replies.
>> It's late, so I have to make this quick.
>> . I was able to reproduce the slow execution through QA, so I don't think
>> it's necessarily a QA vs. ADO thing. Sometimes it runs ok in ADO and in
>> QA. Sometimes it's slow in both (or one or the other). I think this may
>> also rule out client-side vs. server-side cursors.
>> . I did run sp_updatestats, but that didn't seem to help much
>> . I tried WITH RECOMPILE, but didn't seem to help much
>> . I managed to capture the execution plan for when it runs fast (< 2
>> seconds) vs. when it times out (> 30 seconds).
>> Here are the links to those (top lists stats):
>> http://www.animalcrossingcommunity.com/slowplan.txt
>> http://www.animalcrossingcommunity.com/fastplan.txt
>> Here is the actual stored proc that is run (this is actually a snippet --
>> some stuff runs afterwards, but I'm 99% this part is what is causing the
>> problems):
>> http://www.animalcrossingcommunity.com/query.txt
>> Thanks again for all your help.
> Good. Now post the table DDL with indexes and a an estimate of the number
> of rows in each table.
> This is a hard one, and the best plan in cases like this is often to look
> beyond these two plans and see if changes to the indexing structure or
> query formulation can result in a much cheaper (and therefore more stable)
> plan.
> David
> David
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
> Ok, here you go:
> http://www.animalcrossingcommunity.com/ddl.txt
> Here are some more clues. Before I swapped servers (basically same specs,
> but new has Raid1 configuration and old just had one HD), this particular
> query gave me virtually no problems. It typically ran in under a second,
> seemingly averaging about 0.5 seconds. Now, even at best, I'm getting 1-2
> or more seconds -- at worst, 30+ seconds.
> Thanks again for taking time to help. If there's any other info I can
> provide to help diagnosis, let me know.
> Jerad
>
No smoking gun, but here's some things to try:
--ThreadID is the leading column in the clustered index. This index is
duplicative.
drop index ThreadUser.IX_ThreadUser
--Foreign keys should be indexed, and linking tables should be indexed
--for 2-way access
create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
--Try a compound index to support this search
create index IX_ThreadSearch on Thread(ThreadModifyDateTime
desc,ThreadIsDeleted,ThreadIsLocked)
--and modify the procedure to not suppress index usage on
ThreadModifyDateTime
--change
and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
getdate())
--to
and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
dateadd(d,-1,getdate()))
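To spell out why the last change matters: with the arithmetic on the column, SQL Server cannot seek an index on ThreadModifyDateTime and must evaluate the expression for every row; with the column standing alone, the predicate is sargable. A sketch against the Thread table from the posted DDL:

```sql
-- Non-sargable: the expression on the column hides it from the index,
-- so every row must be read to evaluate ThreadModifyDateTime + 1.
select ThreadID
from Thread
where ThreadModifyDateTime + 1 > getdate()

-- Sargable: the bare column on one side allows a range seek
-- on an index over ThreadModifyDateTime.
select ThreadID
from Thread
where ThreadModifyDateTime > dateadd(d, -1, getdate())
```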
David|||Wow.
Ok, so I have NO idea why I didn't already have UserID indexed on
PrivateThread. That could probably explain most of the issues -- and why I
was approaching reads in the millions. Good catch. The others made perfect
sense as well.
So, I've got all of your suggestions implemented, and so far it seems to
have made a big difference. So much thanks again for your time and help!
My only confusion now, is, why this didn't seem to be a problem on the old
server. Especially considering my lack of an index for UserID on
PrivateThread, I'm surprised this didn't give me trouble before.
I've been running a trace, and so far, reads have not exceeded 10,000 and
duration is less than a second except on a few rare occasions where it hits
a few seconds -- so that seems to have fixed my problem so far. Now I'm a
little concerned of how many other spots have inadequate indexing.
Thanks again David (and others) for your time and help. It has been much
appreciated.
Jerad
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
> "Jerad Rose" <no@.spam.com> wrote in message
> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
> No smoking gun, but here's some things to try:
> --ThreadID is the leading column in the clustered index. This index is
> duplicative.
> drop index ThreadUser.IX_ThreadUser
> --Foregn keys should be indexed, and linking tables should be indexed
> --for 2-way access
> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
> --Try a compund index to support this search
> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
> desc,ThreadIsDeleted,ThreadIsLocked)
> --and modify the procedure to not suppress index usage on
> ThreadModifyDateTime
> --change
> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
> getdate())
> --to
> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
> datediff(d,-1,getdate()))
> David
>|||>> I've been running a trace, and so far, reads have not exceeded 10,000 and
Is that for a single query? If so then you aren't done yet<g>. Unless you
are pulling back thousands of rows (which you should question why) you
should be able to do this in a few hundred reads at best with proper
indexing and WHERE clauses. There are exceptions but that many reads
indicate either a full or partial scan. You should find out if that is the case
and make sure those tables are properly indexed as well.
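A quick way to see where those reads come from is STATISTICS IO in Query Analyzer, which reports logical reads per table for each statement. (The procedure name and parameter values below are placeholders, not from your code.)

```sql
-- Report logical/physical reads per table so the scanning join stands out.
set statistics io on
exec dbo.YourProcName @UserID = 123, @BoardID = 63
set statistics io off
```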
Andrew J. Kelly SQL MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
> Wow.
> Ok, so I have NO idea why I didn't already have UserID indexed on
> PrivateThread. That could probably explain most of the issues -- and why
> I was approaching reads in the millions. Good catch. The others made
> perfect sense as well.
> So, I've got all of your suggestions implmeneted, and so far it seems to
> have made a big difference. So much thanks again for your time and help!
> My only confusion now, is, why this didn't seem to be a problem on the old
> server. Especially considering my lack of an index for UserID on
> PrivateThread, I'm surprised this didn't give me trouble before.
> I've been running a trace, and so far, reads have not exceeded 10,000 and
> duration is less than a second except on a few rare occasions where it
> hits a few seconds -- so that seems to have fixed my problem so far. Now
> I'm a little concerned of how many other spots have inadequate indexing.
> Thanks again David (and others) for your time and help. It has been much
> appreciated.
> Jerad
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>|||Hi Andrew.
Well, no, it's a few queries. I have one query that loads ThreadIDs into a
temp table, which I later hit for paging. For example, the first query (the
one we've been dealing with) pulls all threads for a particular user, and
stores the ID along w/ a sequential identity column into a temp table.
Then, I have a second query that joins the temp table with the rest of my
tables to return a set of records -- say, records 201-250 (page 5). So, the
first query could potentially return thousands of records.
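In outline, the pattern looks like this (a simplified sketch; the real column lists and filters are in the linked proc, and the board value here is just an example):

```sql
-- Step 1: capture the ordered ThreadIDs with a sequential row number.
create table #PageKeys (RowID int identity(1,1) primary key, ThreadID int)

insert into #PageKeys (ThreadID)
select ThreadID
from Thread
where BoardID = 63 -- example board
and ThreadIsDeleted = 0
order by ThreadIsSticky desc, ThreadLastPostDateTime desc

-- Step 2: join back to fetch only the requested page (rows 201-250 = page 5).
select t.*
from #PageKeys k
join Thread t on t.ThreadID = k.ThreadID
where k.RowID between 201 and 250
order by k.RowID
```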
I'm not going to say that I still don't have some inadequate indexing, but
I'm much happier with the results I have now than what I was getting before.
But feel free to offer any other suggestions, based on the info in the above
links. If you need additional info other than what has been posted, let me
know.
Thanks again.
Jerad
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eB3rR7jhFHA.1164@.TK2MSFTNGP10.phx.gbl...
>> I've been running a trace, and so far, reads have not exceeded 10,000
>> and
> Is that for a single query? If so then you aren't done yet<g>. Unless
> you are pulling back thousands of rows (which you should question why) you
> should be able to do this in a few hundred reads at best with proper
> indexing and WHERE clauses. There are exceptions but that many reads
> indicates either a full or partial scan. You should find out if that is
> the properly indexed as well.
>
> --
> Andrew J. Kelly SQL MVP
>
> "Jerad Rose" <no@.spam.com> wrote in message
> news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
>> Wow.
>> Ok, so I have NO idea why I didn't already have UserID indexed on
>> PrivateThread. That could probably explain most of the issues -- and why
>> I was approaching reads in the millions. Good catch. The others made
>> perfect sense as well.
>> So, I've got all of your suggestions implmeneted, and so far it seems to
>> have made a big difference. So much thanks again for your time and help!
>> My only confusion now, is, why this didn't seem to be a problem on the
>> old server. Especially considering my lack of an index for UserID on
>> PrivateThread, I'm surprised this didn't give me trouble before.
>> I've been running a trace, and so far, reads have not exceeded 10,000 and
>> duration is less than a second except on a few rare occasions where it
>> hits a few seconds -- so that seems to have fixed my problem so far. Now
>> I'm a little concerned of how many other spots have inadequate indexing.
>> Thanks again David (and others) for your time and help. It has been much
>> appreciated.
>> Jerad
>> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
>> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>>
>|||Ok, another update.
Unfortunately, it looks like this only helped temporarily. It's almost as
if I perform some sort of "reset" (or maybe updating the statistics), and it
works for a while, but then my indexes get outdated and start requiring
excessive reads once again.
I just had one trace that hit over 11 million reads. This just doesn't make
sense when 30 minutes ago, this exact same query only had a couple thousand
reads.
Still confused (and frustrated).
Thanks again for any suggestions.
Jerad
"Jerad Rose" <no@.spam.com> wrote in message
news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
> Wow.
> Ok, so I have NO idea why I didn't already have UserID indexed on
> PrivateThread. That could probably explain most of the issues -- and why
> I was approaching reads in the millions. Good catch. The others made
> perfect sense as well.
> So, I've got all of your suggestions implmeneted, and so far it seems to
> have made a big difference. So much thanks again for your time and help!
> My only confusion now, is, why this didn't seem to be a problem on the old
> server. Especially considering my lack of an index for UserID on
> PrivateThread, I'm surprised this didn't give me trouble before.
> I've been running a trace, and so far, reads have not exceeded 10,000 and
> duration is less than a second except on a few rare occasions where it
> hits a few seconds -- so that seems to have fixed my problem so far. Now
> I'm a little concerned of how many other spots have inadequate indexing.
> Thanks again David (and others) for your time and help. It has been much
> appreciated.
> Jerad
> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>|||The number of rows affected is one of the criteria for the use of an index
as a scan or a seek. If your query is not reusing a query plan it will
generate a new one each time you call it. It uses the values you supply to
determine what the plan should be. If this time the number of rows was too
much for a seek it might choose to do a scan. Some of the values for that
query will be higher than others in the number of rows returned. I haven't
spent a lot of time looking at your query and DDL but that join is fairly
complex and has several LEFT JOINS in it that probably don't help. Which
table is doing the most reads? You might want to be sure there is a
clustered index on the column that you are doing the join or range scan on.
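If you want to rule out a stale cached plan when it goes bad, force a fresh compile for a test run (the procedure name and values are placeholders):

```sql
-- Compile a one-off plan for these exact parameter values
-- instead of reusing whatever plan is cached.
exec dbo.YourProcName @UserID = 123, @BoardID = 63 with recompile

-- Or mark the procedure so its next execution recompiles.
exec sp_recompile 'dbo.YourProcName'
```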
--
Andrew J. Kelly SQL MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:uLgXBHkhFHA.3300@.TK2MSFTNGP15.phx.gbl...
> Ok, another update.
> Unfortunately, it looks like this only helped temporarily. It's almost as
> if I perform some sort of "reset" (or maybe updating the statistics), and
> it works for a while, but then my indexes get outdated and start requiring
> several reads once again.
> I just had one trace that hit over 11 million reads. This just doesn't
> make sense when 30 minutes ago, this exact same query only had a couple
> thousand reads.
> Stiff confused (and frustrated).
> Thanks again for any suggestions.
> Jerad
> "Jerad Rose" <no@.spam.com> wrote in message
> news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
>> Wow.
>> Ok, so I have NO idea why I didn't already have UserID indexed on
>> PrivateThread. That could probably explain most of the issues -- and why
>> I was approaching reads in the millions. Good catch. The others made
>> perfect sense as well.
>> So, I've got all of your suggestions implmeneted, and so far it seems to
>> have made a big difference. So much thanks again for your time and help!
>> My only confusion now, is, why this didn't seem to be a problem on the
>> old server. Especially considering my lack of an index for UserID on
>> PrivateThread, I'm surprised this didn't give me trouble before.
>> I've been running a trace, and so far, reads have not exceeded 10,000 and
>> duration is less than a second except on a few rare occasions where it
>> hits a few seconds -- so that seems to have fixed my problem so far. Now
>> I'm a little concerned of how many other spots have inadequate indexing.
>> Thanks again David (and others) for your time and help. It has been much
>> appreciated.
>> Jerad
>> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
>> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>>
>|||I didn't provide the entire stored proc, as I didn't want to muddy the
waters with too much information. But as I think about it, it may have to
do with the fact that I have several sets of "first queries" that are run
based on the parameters (namely BoardID). Maybe that's causing the
execution plan to be inefficient.
Here is the link to the entire proc, and you'll see what I'm talking about:
http://www.animalcrossingcommunity.com/proc.txt
I did this to avoid having redundant code in many procs, but
I would much rather have redundant code if it means better performance.
Take a look at that and let me know if you think I'm on the right track.
Thanks again.
Jerad
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:eXF4ltkhFHA.1968@.TK2MSFTNGP14.phx.gbl...
> The number of rows affected is one of the criteria for the use of an index
> as a scan or a seek. If your query is not reusing a query plan it will
> generate a new one each time you call it. It uses the values you supply to
> determine what the plan should be. If this time the number of rows was
> too much for a seek it might choose to do a scan. Some of the values for
> that query will be higher than others in the number of rows returned. I
> haven't spent a lot of time looking at your query and DDL but that join is
> fairly complex and has several LEFT JOINS in it that probably don't help.
> Which table is doing the most reads? You might want to be sure there is a
> clustered index on the column that you are doing the join or range scan
> on.
> --
> Andrew J. Kelly SQL MVP
>
> "Jerad Rose" <no@.spam.com> wrote in message
> news:uLgXBHkhFHA.3300@.TK2MSFTNGP15.phx.gbl...
>> Ok, another update.
>> Unfortunately, it looks like this only helped temporarily. It's almost
>> as if I perform some sort of "reset" (or maybe updating the statistics),
>> and it works for a while, but then my indexes get outdated and start
>> requiring several reads once again.
>> I just had one trace that hit over 11 million reads. This just doesn't
>> make sense when 30 minutes ago, this exact same query only had a couple
>> thousand reads.
>> Stiff confused (and frustrated).
>> Thanks again for any suggestions.
>> Jerad
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
>> Wow.
>> Ok, so I have NO idea why I didn't already have UserID indexed on
>> PrivateThread. That could probably explain most of the issues -- and
>> why I was approaching reads in the millions. Good catch. The others
>> made perfect sense as well.
>> So, I've got all of your suggestions implmeneted, and so far it seems to
>> have made a big difference. So much thanks again for your time and
>> help!
>> My only confusion now, is, why this didn't seem to be a problem on the
>> old server. Especially considering my lack of an index for UserID on
>> PrivateThread, I'm surprised this didn't give me trouble before.
>> I've been running a trace, and so far, reads have not exceeded 10,000
>> and duration is less than a second except on a few rare occasions where
>> it hits a few seconds -- so that seems to have fixed my problem so far.
>> Now I'm a little concerned of how many other spots have inadequate
>> indexing.
>> Thanks again David (and others) for your time and help. It has been
>> much appreciated.
>> Jerad
>> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
>> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but new has Raid1 configuration and old just had one HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I can
>> provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foregn keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compund index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
>>
>>
>|||"Jerad Rose" <no@.spam.com> wrote in message
news:OhEInElhFHA.1460@.tk2msftngp13.phx.gbl...
>I didn't provide the entire stored proc, as I didn't want to muddy the
>waters with too much information. But as I think about it, it may have to
>do with the fact that I have several sets of "first queries", that are ran
>based on the parameters (namely BoardID). Maybe that's causing the
>execution plan to be inneficient.
> Here is the link to the entire proc, and you'll see what I'm talking
> about:
> http://www.animalcrossingcommunity.com/proc.txt
> I did this to prevent from having to have redundant code in many procs,
> but I will much rather have redundant code if it meant better performance.
> Take a look at that and let me know if you think I'm on the right track.
> Thanks again.
> Jerad
>
Ok, I would further break down the problem query by @BoardID since the
query logic differs in the two cases. That way SQL doesn't have to use the
same plan for both cases.
else if @BoardID = 63
insert into @TempTable(RowID)
select Thread.ThreadID
from Thread (nolock)
join Users (nolock)
on Users.UserID = Thread.UserID
join UserType
on UserType.UserTypeID = @UserTypeID
left join (
select ThreadID
, min(UserTypeRank) MinUserTypeRank
from PrivateThreadUser (nolock)
join Users (nolock)
on Users.UserID = PrivateThreadUser.UserID
join UserType
on UserType.UserTypeID = Users.UserTypeID
group by ThreadID
) PrivateThreadUserMinRank
on PrivateThreadUserMinRank.ThreadID = Thread.ThreadID
left join PrivateThreadUser (nolock)
on PrivateThreadUser.ThreadID = Thread.ThreadID
and PrivateThreadUser.UserID = @UserID
left join ThreadUser (nolock)
on ThreadUser.ThreadID = Thread.ThreadID
and ThreadUser.UserID = @UserID
where
Thread.BoardID = 63
and Thread.ThreadIsDeleted = 0
and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
dateadd(d,-1,getdate()))
and (ThreadUser.ThreadIsDeleted is null or ThreadUser.ThreadIsDeleted = 0)
order by
Thread.ThreadIsSticky desc
, Thread.ThreadLastPostDateTime desc
else if @BoardID = 53
insert into @TempTable(RowID)
select Thread.ThreadID
from Thread (nolock)
join Users (nolock)
on Users.UserID = Thread.UserID
join UserType
on UserType.UserTypeID = @UserTypeID
left join (
select ThreadID
, min(UserTypeRank) MinUserTypeRank
from PrivateThreadUser (nolock)
join Users (nolock)
on Users.UserID = PrivateThreadUser.UserID
join UserType
on UserType.UserTypeID = Users.UserTypeID
group by ThreadID
) PrivateThreadUserMinRank
on PrivateThreadUserMinRank.ThreadID = Thread.ThreadID
left join PrivateThreadUser (nolock)
on PrivateThreadUser.ThreadID = Thread.ThreadID
and PrivateThreadUser.UserID = @UserID
left join ThreadUser (nolock)
on ThreadUser.ThreadID = Thread.ThreadID
and ThreadUser.UserID = @UserID
where
Thread.BoardID = 36
and Thread.ThreadIsDeleted = 0
and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
dateadd(d,-1,getdate()))
and (PrivateThreadUser.ThreadID is not null or @BoardID <> 36)
and (PrivateThreadUserMinRank.MinUserTypeRank > UserType.UserTypeRank)
order by
Thread.ThreadIsSticky desc
, Thread.ThreadLastPostDateTime desc
David|||Wow. Well, I don't know what to tell you from here. There is no way for me
to tell how efficient these queries are without actually being connected to
your system. Each one of those IF blocks can certainly have a different
query plan and it gets quite convoluted when all lumped together in one sp.
You might consider breaking them into individual sp's and calling the
appropriate one to begin with. Do you really need to do that whole join
twice? Can't you grab what info is relevant the first time and avoid some
of the joins in the second pass?
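As a sketch of the kind of split I mean (procedure names are illustrative, not from the posted code), a thin wrapper can dispatch to one procedure per case so each caches its own plan:

```sql
create procedure dbo.GetThreadPage
@BoardID int,
@UserID int
as
begin
-- Each sub-procedure compiles a plan suited to its own query shape.
if @BoardID = 63
exec dbo.GetThreadPage_Private @UserID
else if @BoardID = 53
exec dbo.GetThreadPage_Staff @UserID
else
exec dbo.GetThreadPage_Standard @BoardID, @UserID
end
```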
--
Andrew J. Kelly SQL MVP
"Jerad Rose" <no@.spam.com> wrote in message
news:OhEInElhFHA.1460@.tk2msftngp13.phx.gbl...
>I didn't provide the entire stored proc, as I didn't want to muddy the
>waters with too much information. But as I think about it, it may have to
>do with the fact that I have several sets of "first queries", that are ran
>based on the parameters (namely BoardID). Maybe that's causing the
>execution plan to be inneficient.
> Here is the link to the entire proc, and you'll see what I'm talking
> about:
> http://www.animalcrossingcommunity.com/proc.txt
> I did this to prevent from having to have redundant code in many procs,
> but I will much rather have redundant code if it meant better performance.
> Take a look at that and let me know if you think I'm on the right track.
> Thanks again.
> Jerad
> "Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
> news:eXF4ltkhFHA.1968@.TK2MSFTNGP14.phx.gbl...
>> The number of rows affected is one of the criteria for the use of an
>> index as a scan or a seek. If your query is not reusing a query plan it
>> will generate a new one each time you call it. It uses the values you
>> supply to determine what the plan should be. If this time the number of
>> rows was too much for a seek it might choose to do a scan. Some of the
>> values for that query will be higher than others in the number of rows
>> returned. I haven't spent a lot of time looking at your query and DDL
>> but that join is fairly complex and has several LEFT JOINS in it that
>> probably don't help. Which table is doing the most reads? You might want
>> to be sure there is a clustered index on the column that you are doing
>> the join or range scan on.
>> --
>> Andrew J. Kelly SQL MVP
>>
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:uLgXBHkhFHA.3300@.TK2MSFTNGP15.phx.gbl...
>> Ok, another update.
>> Unfortunately, it looks like this only helped temporarily. It's almost
>> as if I perform some sort of "reset" (or maybe updating the statistics),
>> and it works for a while, but then my indexes get outdated and start
>> requiring several reads once again.
>> I just had one trace that hit over 11 million reads. This just doesn't
>> make sense when 30 minutes ago, this exact same query only had a couple
>> thousand reads.
>> Still confused (and frustrated).
>> Thanks again for any suggestions.
>> Jerad
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:%23WPIMyjhFHA.3448@.TK2MSFTNGP12.phx.gbl...
>> Wow.
>> Ok, so I have NO idea why I didn't already have UserID indexed on
>> PrivateThread. That could probably explain most of the issues -- and
>> why I was approaching reads in the millions. Good catch. The others
>> made perfect sense as well.
>> So, I've got all of your suggestions implemented, and so far it seems
>> to have made a big difference. So much thanks again for your time and
>> help!
>> My only confusion now, is, why this didn't seem to be a problem on the
>> old server. Especially considering my lack of an index for UserID on
>> PrivateThread, I'm surprised this didn't give me trouble before.
>> I've been running a trace, and so far, reads have not exceeded 10,000
>> and duration is less than a second except on a few rare occasions where
>> it hits a few seconds -- so that seems to have fixed my problem so far.
>> Now I'm a little concerned about how many other spots have inadequate
>> indexing.
>> Thanks again David (and others) for your time and help. It has been
>> much appreciated.
>> Jerad
>> "David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
>> message news:%23oWyIhjhFHA.320@.TK2MSFTNGP09.phx.gbl...
>> "Jerad Rose" <no@.spam.com> wrote in message
>> news:OOfwlLjhFHA.2072@.TK2MSFTNGP14.phx.gbl...
>> Ok, here you go:
>> http://www.animalcrossingcommunity.com/ddl.txt
>> Here are some more clues. Before I swapped servers (basically same
>> specs, but the new one has a RAID 1 configuration and the old had a single HD), this
>> particular query gave me virtually no problems. It typically ran in
>> under a second, seemingly averaging about 0.5 seconds. Now, even at
>> best, I'm getting 1-2 or more seconds -- at worst, 30+ seconds.
>> Thanks again for taking time to help. If there's any other info I
>> can provide to help diagnosis, let me know.
>> Jerad
>> No smoking gun, but here's some things to try:
>> --ThreadID is the leading column in the clustered index. This index is
>> duplicative.
>> drop index ThreadUser.IX_ThreadUser
>> --Foreign keys should be indexed, and linking tables should be indexed
>> --for 2-way access
>> create index IX_PrivateThreadUser_User on PrivateThreaduser(UserID)
>> --Try a compound index to support this search
>> create index IX_ThreadSearch on Thread(ThreadModifyDateTime
>> desc,ThreadIsDeleted,ThreadIsLocked)
>> --and modify the procedure to not suppress index usage on
>> ThreadModifyDateTime
>> --change
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime + 1 >
>> getdate())
>> --to
>> and (Thread.ThreadIsLocked = 0 or Thread.ThreadModifyDateTime >
>> datediff(d,-1,getdate()))
>> David
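David's last change is a sargability fix: arithmetic applied to the column (`ThreadModifyDateTime + 1`) prevents the optimizer from seeking an index on that column, while the algebraically equivalent form with the arithmetic moved to the constant side does not. A minimal sketch using the posted column names; the `DATEADD` version shown here is the plain equivalent of "modified within the last day", assuming an index exists on `Thread(ThreadModifyDateTime)`:

```sql
-- Non-sargable: the expression wraps the column, so an index on
-- ThreadModifyDateTime cannot be used for a seek.
SELECT ThreadID
FROM Thread
WHERE ThreadModifyDateTime + 1 > GETDATE();

-- Sargable: same filter, but the column stands alone and the arithmetic
-- moves to the constant side, so an index seek becomes possible.
SELECT ThreadID
FROM Thread
WHERE ThreadModifyDateTime > DATEADD(day, -1, GETDATE());
```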
>>
>>
>>
>
Hey guys.
I went ahead and split that proc into multiple procs, so that there were no
IF blocks. This seems to have helped a great deal, as this proc is no
longer taking more than a second or two (worst case). It's been running
like this for about a day, so hopefully this was the answer. I'm still
getting a few thousand reads, but I'm not sure I'll be able to get it down
any less than that.
Thanks again for all your help. This has been a good learning experience
for me, as I now feel better equipped to diagnose problem queries and
indexing issues such as this.
Jerad
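The restructuring described above (one specialized procedure per former IF branch, so each branch gets its own cached plan) can be sketched as a thin dispatcher. All names here are hypothetical, not the actual procedures from the thread:

```sql
-- Sketch only; procedure and parameter names are hypothetical.
-- Each branch now lives in its own procedure, so SQL Server caches a
-- separate, stable plan per branch instead of one plan serving every path.
CREATE PROCEDURE dbo.GetThreadList
    @BoardID int = NULL
AS
BEGIN
    IF @BoardID IS NULL
        EXEC dbo.GetThreadList_AllBoards;
    ELSE
        EXEC dbo.GetThreadList_ByBoard @BoardID = @BoardID;
END
```

The dispatcher itself stays trivial, so its plan never matters; only the leaf procedures carry the expensive joins.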
"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:ODOgF4lhFHA.3436@.tk2msftngp13.phx.gbl...
> Wow. Well, I don't know what to tell you from here. There is no way for me
> to tell how efficient these queries are without actually being connected
> to your system. Each one of those IF blocks can certainly have a
> different query plan and it gets quite convoluted when all lumped together
> in one sp. You might consider breaking them into individual sp's and
> calling the appropriate one to begin with. Do you really need to do that
> whole join twice? Can't you grab what info is relevant the first time and
> avoid some of the joins in the second pass?
> --
> Andrew J. Kelly SQL MVP
>