
Friday, March 23, 2012

query for performance monitoring

Hi All

In Oracle I can get performance variables like Library Cache Hits, Dictionary Cache Hits, Database Buffer Reads, Redo Log Buffer Reads, etc. from the dynamic performance tables.

I want to know how to get the same or related performance details in SQL Server 2000 and 2005 (which parameters to watch, their optimal values, and which table/dynamic view to query).

Thanks in Advance

You can use dynamic management views, new to 2005.

http://msdn2.microsoft.com/en-us/library/ms188754.aspx

|||

Hi,

In SQL 2000 (and 2005), you can query master.dbo.sysperfinfo to find most of what you're looking for.
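
For example, a rough sketch (the counter names below are the standard Buffer Manager / Cache Manager counters, but verify the exact names on your instance):

select object_name, counter_name, instance_name, cntr_value
from master.dbo.sysperfinfo
where counter_name like '%cache hit ratio%'      -- buffer/plan cache hit ratios and their "base" counters
   or counter_name in ('Page reads/sec', 'Page writes/sec')

-- Ratio counters come in pairs: divide the counter by its matching "base" counter,
-- e.g. 'Buffer cache hit ratio' / 'Buffer cache hit ratio base'.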

In addition, sp_monitor and some DBCC commands (e.g., DBCC SQLPERF) expose further information; see http://www.sql-server-performance.com/dbcc_commands.asp and http://www.sqldev.net/articles/dbcc_sqlperf.htm for more details.

Hope that helps.

Jamie

|||

In addition to the other references already provided, you might find the examples and information in this white paper useful: http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx

Regards,

Gail

|||

Hi, thanks for your reply.

I need a query/queries to find the following in MS SQL Server.

1) To find most frequently accessed/executed tables/procedures

Please help me

Thanks in advance

|||

You might find exploring the dynamic system views to be useful.

For example:

SELECT * FROM sys.dm_os_performance_counters

And there are quite a few additional dynamic system views to choose from. In Object Explorer, expand your database, then Views, then System Views.
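
On the earlier question about the most frequently executed procedures and most heavily accessed tables, here is a rough sketch against two of the 2005 DMVs (these counters reset when the instance restarts or when plans leave the cache, so treat the numbers as indicative only):

-- Most frequently executed statements since their plans entered the cache
SELECT TOP 20
       qs.execution_count,
       OBJECT_NAME(st.objectid) AS object_name,   -- NULL for ad hoc batches; resolves only in the current database
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC

-- Most heavily read/written tables in the current database since the instance started
SELECT OBJECT_NAME(ius.object_id) AS table_name,
       SUM(ius.user_seeks + ius.user_scans + ius.user_lookups) AS reads,
       SUM(ius.user_updates) AS writes
FROM sys.dm_db_index_usage_stats AS ius
WHERE ius.database_id = DB_ID()
GROUP BY ius.object_id
ORDER BY reads DESC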

Often, the quality of the responses received is related to our ability to 'bounce' ideas off of each other. In the future, to make it easier for us to offer you assistance, and to prevent folks from wasting time on already answered questions, please don't post to multiple newsgroups. Choose the one that best fits your question and post there. Only post to another newsgroup if you get no answer in a day or two (or if you accidentally posted to the wrong newsgroup, and you indicate that you've already posted elsewhere).

|||

Check the response in your duplicate post in the Transact-SQL forum.


|||

Hi Arnie...

I made the post in the other forum by mistake.

Thanks for the reminder.

|||

Can you send me the link to the reply for your question?

I need a query/queries to find the following in MS sql server.

1) To find most frequently accessed/executed tables/procedures


Tuesday, March 20, 2012

Query exec time of 2005 vs. 2000.

Greetings. We are going to begin moving our 2000 DB's to 2005 shortly and I
was wondering about performance. From reading a few posts here, some claim
that 2005 is showing slower performance than 2000. After answering the
"update stats" question, they are then referred to the exec plans. From there
all correspondence stops, leaving guys like me to wonder the outcome.
So, have those of you that have made the move seen degraded performance in
query times (excluding needing to update the stats)? If so, what was
determined to be the issue? I have 70+ DB's to move, the idea of replaying
profiler traces to do speed comparisons for each one sounds tragic. As does
looking at lots of different exec plans after the move.
TIA, ChrisR

ChrisR,
As a best practice we always re-index our databases (at least the Production ones) when moving up a version, 2000 to 2005, or installing a new service pack.
This should update the stats and help produce execution plans better geared to the newer version.
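
As a rough sketch of that post-upgrade maintenance (table and database names here are placeholders; 2005 syntax shown, on 2000 you would use DBCC DBREINDEX instead of ALTER INDEX):

USE MyDatabase     -- placeholder name
GO
ALTER INDEX ALL ON dbo.MyTable REBUILD     -- rebuilds the indexes and refreshes their statistics
GO
EXEC sp_updatestats                        -- updates the remaining (column) statistics database-wide
GO
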
What build of SQL 2005 are you going to go to? The higher builds start to
look at the TokenAndPermUserStore memory as a problem; see this blog:
http://sqlblogcasts.com/blogs/grumpyolddba/archive/2007/07/19/more-tokenandpermuserstore.aspx
MS is introducing cumulative hotfixes post SP2 (build 3042), so watch for
these and the problems they have fixed:
http://support.microsoft.com/kb/937137
Back to your original question: as always, a lot depends on your application
code and the hardware that you run on. Hyper-threaded servers are still not
the best:
http://sqlblog.com/blogs/kevin_kline/archive/2007/08/18/the-perils-of-hyperthreading-for-sql-server.aspx
Good luck
Chris
"ChrisR" <ChrisR@.discussions.microsoft.com> wrote in message
news:0CA05B48-E696-4EB5-A383-BDBACCD5546C@.microsoft.com...
> Greetings. We are going to begin moving our 2000 DB's to 2005 shortly and
> I
> was wondering about performance. From reading a few posts here, some claim
> that 2005 is showing slower performance than 2000. After answering the
> "update stats" question, they are then referred to the exec plans. From
> there
> all correspondence stops, leaving guys like me to wonder the outcome.
> So, have those of you that have made the move seen degraded performance in
> query times (excluding needing to update the stats)? If so, what was
> determined to be the issue? I have 70+ DB's to move, the idea of replaying
> profiler traces to do speed comparisons for each one sounds tragic. As
> does
> looking at lots of different exec plans after the move.
> TIA, ChrisR

Friday, March 9, 2012

query datetime range, how to optimize

Hi,
My query 1 needs to narrow down a time range. I indexed the column
(pos_trandate_time, plus the others in the WHERE clause), but the performance
is still very slow, around 1 minute, whereas the query on date only is much
faster. What should I do?
--query 1, cost 99%
select CONVERT(char(8), pos_trandate_time, 3) as tx_date, CONVERT(char(8),
pos_trandate_time, 8) as tx_time, pos_purchase_amt as tx_amt,
pos_cashback_amt as tx_cashback_amt, pos_seq_no as tx_seq_no, pos_msg_type
as tx_msg_type, pos_trans_code as tx_tran_type
from pos_txn_log (index=pk_pos_txn_log)
where pos_tid ='80007001' and pos_trandate_time >= '2004-07-05 00:00:00'
and pos_trandate_time <= '2004-07-05 23:59:59'
order by pos_trandate_time
-- query 2, cost 1%
select CONVERT(char(8), pos_trandate_time, 3) as tx_date, CONVERT(char(8),
pos_trandate_time, 8) as tx_time,
pos_purchase_amt as tx_amt, pos_cashback_amt as tx_cashback_amt, pos_seq_no
as tx_seq_no, pos_msg_type as tx_msg_type,
pos_trans_code as tx_tran_type
from pos_txn_log where pos_tid ='80007001' and pos_rid ='11180007000' and
CONVERT(char(8), pos_settlementdate, 3) = '05/07/04'
order by pos_trandate_time
Thanks
Wang

What happens when you take OFF the index hint
"from pos_txn_log (index=pk_pos_txn_log) "
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
"Andrew" <Andrew@.discussions.microsoft.com> wrote in message
news:9A2A9227-8111-4788-B4DA-C96848AEF958@.microsoft.com...
> Hi,
> My query 1 needs to narrow down a time range. I index the column
> (pos_trandate_time, plus the other on the where clause), but the
performance
> still very slow, around 1 minutes. Wherease the query on date only is much
> faster. What should I do?
> --query 1, cost 99%
> select CONVERT(char(8), pos_trandate_time, 3) as tx_date, CONVERT(char(8),
> pos_trandate_time, 8) as tx_time, pos_purchase_amt as tx_amt,
> pos_cashback_amt as tx_cashback_amt, pos_seq_no as tx_seq_no, pos_msg_type
> as tx_msg_type, pos_trans_code as tx_tran_type
> from pos_txn_log (index=pk_pos_txn_log)
> where pos_tid ='80007001' and pos_trandate_time >= '2004-07-05 00:00:00'
> and pos_trandate_time <= '2004-07-05 23:59:59'
> order by pos_trandate_time
> -- query 2, cost %1
> select CONVERT(char(8), pos_trandate_time, 3) as tx_date, CONVERT(char(8),
> pos_trandate_time, 8) as tx_time,
> pos_purchase_amt as tx_amt, pos_cashback_amt as tx_cashback_amt,
pos_seq_no
> as tx_seq_no, pos_msg_type as tx_msg_type,
> pos_trans_code as tx_tran_type
> from pos_txn_log where pos_tid ='80007001' and pos_rid ='11180007000'
and
> CONVERT(char(8), pos_settlementdate, 3) = '05/07/04'
> order by pos_trandate_time
>
> Thanks
> Wang

|||

Almost no difference. I tested.
I actually use the CONVERT() function on dt_field in the WHERE clause; since dt_field
is an indexed field, this may cause the query not to use the index. So the performance
should be bad, but testing shows it is much faster. Strange!
Thanks
Andrew
"Wayne Snyder" wrote:
> What happens when you take OFF the index hint
> "from pos_txn_log (index=pk_pos_txn_log) "
>
> --
> Wayne Snyder, MCDBA, SQL Server MVP
> Mariner, Charlotte, NC
> www.mariner-usa.com
> (Please respond only to the newsgroups.)
> I support the Professional Association of SQL Server (PASS) and it's
> community of SQL Server professionals.
> www.sqlpass.org
> "Andrew" <Andrew@.discussions.microsoft.com> wrote in message
> news:9A2A9227-8111-4788-B4DA-C96848AEF958@.microsoft.com...
> > Hi,
> >
> > My query 1 needs to narrow down a time range. I index the column
> > (pos_trandate_time, plus the other on the where clause), but the
> performance
> > still very slow, around 1 minutes. Wherease the query on date only is much
> > faster. What should I do?
> >
> > --query 1, cost 99%
> > select CONVERT(char(8), pos_trandate_time, 3) as tx_date, CONVERT(char(8),
> > pos_trandate_time, 8) as tx_time, pos_purchase_amt as tx_amt,
> > pos_cashback_amt as tx_cashback_amt, pos_seq_no as tx_seq_no, pos_msg_type
> > as tx_msg_type, pos_trans_code as tx_tran_type
> > from pos_txn_log (index=pk_pos_txn_log)
> > where pos_tid ='80007001' and pos_trandate_time >= '2004-07-05 00:00:00'
> > and pos_trandate_time <= '2004-07-05 23:59:59'
> > order by pos_trandate_time
> >
> > -- query 2, cost %1
> > select CONVERT(char(8), pos_trandate_time, 3) as tx_date, CONVERT(char(8),
> > pos_trandate_time, 8) as tx_time,
> > pos_purchase_amt as tx_amt, pos_cashback_amt as tx_cashback_amt,
> pos_seq_no
> > as tx_seq_no, pos_msg_type as tx_msg_type,
> > pos_trans_code as tx_tran_type
> > from pos_txn_log where pos_tid ='80007001' and pos_rid ='11180007000'
> and
> > CONVERT(char(8), pos_settlementdate, 3) = '05/07/04'
> > order by pos_trandate_time
> >
> >
> > Thanks
> > Wang
>
>
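
A follow-up note on the range predicates above: the usual way to keep the datetime index usable is to avoid wrapping the indexed column in CONVERT() and to use a half-open range instead of an inclusive 00:00:00-23:59:59 pair. A sketch for query 1 (assuming an index that leads on pos_tid and/or pos_trandate_time):

select CONVERT(char(8), pos_trandate_time, 3) as tx_date,
       CONVERT(char(8), pos_trandate_time, 8) as tx_time,
       pos_purchase_amt as tx_amt, pos_cashback_amt as tx_cashback_amt,
       pos_seq_no as tx_seq_no, pos_msg_type as tx_msg_type,
       pos_trans_code as tx_tran_type
from pos_txn_log
where pos_tid = '80007001'
  and pos_trandate_time >= '20040705'      -- inclusive lower bound
  and pos_trandate_time <  '20040706'      -- exclusive upper bound, so no gap after 23:59:59
order by pos_trandate_time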

Query datetime and its performance

Hi all,

I know I'm missing something very obvious here, but I can't see it at the moment. I would appreciate it if someone could help.

Why do I get a table scan when I run the query below? Note that Created_Date is indexed.

DECLARE @LastDate datetime

SET @lastDate = (SELECT DATEADD(hh, -1, GETDATE()))

SELECT * FROM Report WHERE Created_Date > @lastDate

However, I don't get a table scan when I do the following query:

SELECT * FROM Report WHERE Created_Date > (Getdate()-1)

Thanks in advance

Because you are using a SELECT statement to obtain the value. Use the following statements instead:

DECLARE @LastDate datetime;

SET @lastDate = DATEADD(hh, -1, GETDATE());

SELECT * FROM Report WHERE Created_Date > @lastDate;

Regards,

|||

Thanks for replying Ronald

But nope, I tried that and I still get a table scan. :-(

BTW I am using SQL 2000

|||

Philly:

I think the answer is in two parts. First, your SELECT * query requires a bookmark lookup for each row selected by your WHERE clause whenever it uses a nonclustered index that does not include all columns of the table, and that is the case with your index based only on this date. The optimizer "knows" that if there are enough bookmark lookups, it is more efficient to do a table scan than to use the index plus bookmark lookups. Second (somebody PLEASE check me on this one), the optimizer has special handling for GETDATE() that it does not have for the variable. It can therefore "figure out" that your GETDATE()-1 query returns few enough rows that the cost of the bookmark lookups does not outweigh the cost of a table scan. It does NOT make this determination when you compare against your variable.


Dave
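
If it helps to see what Dave is describing, one quick way on SQL 2000 is to compare the estimated plans with SHOWPLAN (a sketch only, using the table and column names from the question; nothing below is executed while SHOWPLAN is on):

SET SHOWPLAN_TEXT ON
GO
DECLARE @lastDate datetime
SET @lastDate = DATEADD(hh, -1, GETDATE())
SELECT * FROM Report WHERE Created_Date > @lastDate        -- typically a table/clustered index scan
GO
SELECT * FROM Report WHERE Created_Date > (GETDATE() - 1)  -- may show an index seek plus bookmark lookup
GO
SET SHOWPLAN_TEXT OFF
GO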

|||

In this case, the table scan is probably the most efficient way to get the data, according to the query optimizer. Table scans are not always bad.

Your SELECT statement is getting all fields in the table and excluding probably only a handful of rows. If it used the index, it would have to find each matching record in the index, get the PK from the index, and look up that PK in the clustered index to get the record. By not using the secondary index, it just reads the table, does the compare, and is done.

Now, if you were doing a "SELECT Created_Date FROM ..." query, that should use the index, because the data is all contained in the index and doesn't need to reference the table at all.

|||

Thanks all for replying,

I guess this is one case where the query optimizer tries to outsmart us, which is not a bad idea in some cases. The problem I have is that the table I am querying is very large, several million records, so a table scan is bad for me. I don't know if GETDATE() gets some special handling as Dave mentioned, but it is definitely handled differently than the variable, which is interesting. I understand Tom's suggestion is to use "SELECT Created_Date FROM ...". I tried that and it works nicely. I wonder if there is another way to force SQL to use the index here instead of listing only Created_Date after the SELECT statement.

Thanks again

|||

You can force SQL Server to use an index. This usually isn't recommended, but may solve your problem. The syntax would be...

SELECT * FROM Report WITH (INDEX = IndexNameHere) WHERE Created_Date > @lastDate;
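
An alternative to the hint, along the lines of what Dave and Tom described, is a covering index so the bookmark lookups go away. A sketch only (index and column names are illustrative; on SQL 2000 there is no INCLUDE clause, so the extra columns have to be added as key columns, which only makes sense if you select a handful of narrow columns rather than *):

CREATE NONCLUSTERED INDEX IX_Report_CreatedDate_Covering
    ON Report (Created_Date, ReportID)     -- ReportID stands in for the columns you actually need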

Query Date Stored as Text

I am using Windows "Performance Logs and Alerts" to log performance statistics to a SQL database. In this case I am using a Counter Log, which creates the necessary tables upon startup of the log. I can see my data pouring into the database, and now I am writing some T-SQL statements to query the data and display it on an aspx page. I have one particular problem that is presenting me with a bit of a challenge. I want to be able to query my captured performance counters based on date ranges entered on the aspx page. Unfortunately, I am having trouble doing that because the CounterDateTime field, where the date for each counter sample is stored, is a char(24) data type instead of a datetime data type. I'm not sure why MS chose to store dates in a char field, nor am I sure how to query that field for date ranges. So my question is: how would one query for a date range when the dates being queried are in char format? Thanks in advance.

Could you post a sample of how the datetime value is formatted in the table? If it is a format that retains order when compared as character data, then you can simply use string comparisons. Best is to convert the column to datetime and then do your manipulations. Of course, performance will suffer if your primary search condition is the datetime column only. For the conversion part, I think you may have to strip some of the time part. If I remember correctly, the values stored by perfmon are higher resolution than what SQL Server currently supports; this is probably one reason why they can't store it in a datetime column, for example.

Here is an example of what the data looks like in the datetime field: 2006-08-15 11:37:17.121

Unfortunately, I doubt that changing the data type in the table is an option, since the table and data types are created automatically by the SysMon counter log. My guess is that if I change the structure of the field, the counter log will error out when it tries to log data, since the format in the table will no longer match the type of data the counter log is trying to insert. I don't know if there is a good solution to this problem. Is the CounterDateTime field basically just useless data that one can't query? On my ASP.NET web page, I allow the user to select a start date and an end date. Therefore, in my SQL query I want to select everything that falls between the two dates. Since the date fields are text, perhaps that is not going to be possible.

Thanks.

|||

It is the ODBC canonical format, so you can just use it directly without converting the datetime. This format preserves the order of the values. See the examples below:

select * from Northwind.dbo.Orders
where convert(varchar, OrderDate, 121) >= '1996-07-10 00:00:00.000' and
convert(varchar, OrderDate, 121) < '1996-08-01 06:40:00.000'

select * from Northwind.dbo.Orders
where OrderDate >= '1996-07-10 00:00:00.000' and
OrderDate < '1996-08-01 06:40:00.000'

I didn't imply that you should convert the column in the table. Instead you can do a convert at run-time in your query. In any case, it is not really required. You can do your query like:

select *
from perfmon_table
where CounterDateTime >= convert(varchar, @datetime_start, 121)
  and CounterDateTime < convert(varchar, @datetime_end, 121)

-- or, if there is no index on the CounterDateTime column, just do:

select *
from perfmon_table
where cast(CounterDateTime as datetime) >= @datetime_start
  and cast(CounterDateTime as datetime) < @datetime_end
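
For completeness, a small usage sketch of the first form (the parameter values would normally come from the aspx page; they are hard-coded here only for illustration):

DECLARE @datetime_start datetime, @datetime_end datetime

SET @datetime_start = '2006-08-15 00:00:00.000'
SET @datetime_end = '2006-08-16 00:00:00.000'

SELECT *
FROM perfmon_table        -- placeholder; use the table the counter log created
WHERE CounterDateTime >= CONVERT(varchar, @datetime_start, 121)
  AND CounterDateTime < CONVERT(varchar, @datetime_end, 121)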

|||

Thanks for your help! That works like a champ!