Monday, March 26, 2012

Query goes from 1 second to 40 minutes with only 1 more record?

I have a very strange problem. I have a simple SELECT COUNT(*) with a handful of joined tables that works fine and returns the count of records in under 1 second when the criteria in the WHERE clause limit the count to 36,984. But if I change the criteria so that the number of records would be just 1 higher, 36,985, the query takes ~40 minutes!

Here's the query

SELECT count(*)
FROM AT JOIN Al ON Al.AlID= AT.ALID
JOIN DProfile ON Al.DProfileID= DProfile.DProfileID
JOIN Label ON DProfile.LabelID = Label.LabelID
JOIN PricingTier ON PricingTier.LabelID = DProfile.LabelID
JOIN PricingTarget on PricingTarget.PricingTargetID = PricingTier.PricingTargetID
JOIN ATP ON ATP.PricingTierID = PricingTier.PricingTierID
WHERE PricingTarget.Target = 'AT'
AND PricingTier.MaxAgeInDays = 0
AND ATP.LengthMultiple = 0
AND AT.ATID > 408095
AND AT.ATID < 451199

Notice the AT.ATID range predicate at the end of the WHERE clause. This is how I'm changing the number of rows that the SELECT COUNT(*) should find.

Here's the output from showplan

|--Compute Scalar(DEFINE:([Expr1021]=CONVERT_IMPLICIT(int,[Expr1028],0)))
|--Stream Aggregate(DEFINE:([Expr1028]=Count(*)))
|--Hash Match(Inner Join, HASH:([MGN2].[dbo].[AL].[ALID])=([MGN2].[dbo].[AT].[ALID]))
|--Hash Match(Inner Join, HASH:([MGN2].[dbo].[DProfile].[DProfileID])=([MGN2].[dbo].[AL].[DProfileID]), RESIDUAL:([MGN2].[dbo].[DProfile].[DProfileID]=[MGN2].[dbo].[AL].[DProfileID]))
| |--Nested Loops(Inner Join, WHERE:([MGN2].[dbo].[PricingTier].[LabelID]=[MGN2].[dbo].[DProfile].[LabelID]))
| | |--Nested Loops(Inner Join, OUTER REFERENCES:([MGN2].[dbo].[PricingTier].[PricingTargetID]))
| | | |--Nested Loops(Inner Join, OUTER REFERENCES:([MGN2].[dbo].[ATP].[PricingTierID]))
| | | | |--Clustered Index Scan(OBJECT:([MGN2].[dbo].[ATP].[PK_ATPID]), WHERE:([MGN2].[dbo].[ATP].[LengthMultiple]=(0)))
| | | | |--Clustered Index Seek(OBJECT:([MGN2].[dbo].[PricingTier].[PK_PricingTierID]), SEEK:([MGN2].[dbo].[PricingTier].[PricingTierID]=[MGN2].[dbo].[ATP].[PricingTierID]), WHERE:([MGN2].[dbo].[PricingTier].[MaxAgeInDays]=(0)) ORDERED FORWARD)
| | | |--Clustered Index Seek(OBJECT:([MGN2].[dbo].[PricingTarget].[PK_PricingTargetID]), SEEK:([MGN2].[dbo].[PricingTarget].[PricingTargetID]=[MGN2].[dbo].[PricingTier].[PricingTargetID]), WHERE:([MGN2].[dbo].[PricingTarget].[Target]='AT') ORDERED FORWARD)
| | |--Clustered Index Scan(OBJECT:([MGN2].[dbo].[DProfile].[PK_D_Profiles]))
| |--Clustered Index Scan(OBJECT:([MGN2].[dbo].[AL].[PK_AL]))
|--Clustered Index Seek(OBJECT:([MGN2].[dbo].[AT].[PK_Track]), SEEK:([MGN2].[dbo].[AT].[ATID] > (408095) AND [MGN2].[dbo].[AT].[ATID] < (451199)) ORDERED FORWARD)

Now, if I change that part of the WHERE clause from 'AND AT.ATID > 408095' to 'AND AT.ATID > 408094', which increases the number of rows returned by one, the query goes from taking 1 second to 40 minutes. Here's the showplan text for that case; the only difference in the query is that one number in the WHERE clause.

|--Compute Scalar(DEFINE:([Expr1021]=CONVERT_IMPLICIT(int,[Expr1028],0)))
|--Stream Aggregate(DEFINE:([Expr1028]=Count(*)))
|--Merge Join(Inner Join, MERGE:([MGN2].[dbo].[PricingTarget].[PricingTargetID])=([MGN2].[dbo].[PricingTier].[PricingTargetID]), RESIDUAL:([MGN2].[dbo].[PricingTarget].[PricingTargetID]=[MGN2].[dbo].[PricingTier].[PricingTargetID]))
|--Clustered Index Scan(OBJECT:([MGN2].[dbo].[PricingTarget].[PK_PricingTargetID]), WHERE:([MGN2].[dbo].[PricingTarget].[Target]='AT') ORDERED FORWARD)
|--Nested Loops(Inner Join, WHERE:([MGN2].[dbo].[AL].[ALID]=[MGN2].[dbo].[AT].[ALID]))
|--Nested Loops(Inner Join, WHERE:([MGN2].[dbo].[ATPrice].[PricingTierID]=[MGN2].[dbo].[PricingTier].[PricingTierID]))
| |--Sort(ORDER BY:([MGN2].[dbo].[PricingTier].[PricingTargetID] ASC))
| | |--Hash Match(Inner Join, HASH:([MGN2].[dbo].[DProfile].[DProfileID])=([MGN2].[dbo].[AL].[DProfileID]), RESIDUAL:([MGN2].[dbo].[DProfile].[DProfileID]=[MGN2].[dbo].[AL].[DProfileID]))
| | |--Nested Loops(Inner Join, WHERE:([MGN2].[dbo].[PricingTier].[LabelID]=[MGN2].[dbo].[DProfile].[LabelID]))
| | | |--Clustered Index Scan(OBJECT:([MGN2].[dbo].[PricingTier].[PK_PricingTierID]), WHERE:([MGN2].[dbo].[PricingTier].[MaxAgeInDays]=(0)) ORDERED FORWARD)
| | | |--Clustered Index Scan(OBJECT:([MGN2].[dbo].[DProfile].[PK_D_Profiles]))
| | |--Clustered Index Scan(OBJECT:([MGN2].[dbo].[AL].[PK_AL]))
| |--Clustered Index Scan(OBJECT:([MGN2].[dbo].[ATPrice].[PK_ATPriceID]), WHERE:([MGN2].[dbo].[ATPrice].[LengthMultiple]=(0)))
|--Clustered Index Seek(OBJECT:([MGN2].[dbo].[AT].[PK_Track]), SEEK:([MGN2].[dbo].[AT].[ATID] > (408094) AND [MGN2].[dbo].[AT].[ATID] < (451199)) ORDERED FORWARD)

I tried increasing the size of TempDB from the default of 8 MB to 1000 MB for both files, but it behaves exactly the same way. I'm at a loss as to what is causing this dramatic difference for such a simple query. Does anyone have an idea?

Thanks!

By the way, this is on SQL Server 2005, if that helps any.

|||

Jim:

Good post. I appreciate your inclusion of the plan and the research you put into your question. It appears to me that you are at a plan "crossover" point. What I mean is that you are at the point where the mere addition of one more record to the output range causes the query optimizer to choose a different query plan. In your plan I see many "clustered index scans". A covering index might be appropriate to improve the performance of your query. Can you post the indexes, keys and unique constraints of all of your tables?
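For illustration only, a covering index along these lines might let the optimizer seek instead of scan; the key and included columns here are guesses based on the predicates and joins visible in your plan, so treat this as a sketch rather than a recommendation:

-- Hypothetical covering indexes; column choices are assumptions taken from the posted plan.
CREATE NONCLUSTERED INDEX IX_ATP_LengthMultiple
ON dbo.ATP (LengthMultiple, PricingTierID)
GO
CREATE NONCLUSTERED INDEX IX_PricingTier_MaxAgeInDays
ON dbo.PricingTier (MaxAgeInDays)
INCLUDE (PricingTargetID, LabelID)
GO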

What follows below is academic. I am not sure whether I need to include it, because it looks like you have already done this in your testing and have a good handle on it. If it is of no interest, just skip the rest.


Dave

I created a mock-up table with 32767 rows of data. One of the columns is a "testDate" column with an associated index. After creating the table I performed a number of select queries with SHOWPLAN_TEXT turned on so that I could discover the filter criteria at which the query plan switched from "INDEX SEEK" to "CLUSTERED INDEX SCAN". I found that when I switched the filter date from "1/31/7" to "1/30/7", the plan switched. I then ran these two queries with SHOWPLAN_TEXT turned on to illustrate the query plan "crossover" point. I think that your query has a similar issue. My main question to you: was it your intent to find the crossover, or is this something that jumped up and bit you? OUCH!

if exists
( select 0 from sysobjects
where type = 'U'
and id = object_id ('dbo.crossoverTest')
)
drop table dbo.crossoverTest
go

create table dbo.crossoverTest
( rid integer
constraint pk_crossoverTest primary key,
filler char (200),
testDate datetime
)
go

create index testDate_ndx
on dbo.crossoverTest (testDate)
go

insert into dbo.crossoverTest
select iter,
'Filler',
dateadd (mi, -17*iter, convert(datetime, '2/3/7'))
from (
select 256*b.number + a.number as iter
from master.dbo.spt_values a (nolock)
inner join master.dbo.spt_values b (nolock)
on a.[name] is null
and b.[name] is null
and b.number <= 127
and a.number <= 255
and 256*b.number + a.number > 0
) as small_iterator

go

update statistics dbo.crossoverTest
go

set showplan_text on
go

select rid,
left (filler, 10) as Filler,
testDate
from crossoverTest
where testDate >= '1/31/7'

go

-- StmtText
-- --
-- |--Compute Scalar(DEFINE:([Expr1002]=substring(Convert([crossoverTest].[filler]), 1, 10)))
-- |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([tempdb].[dbo].[crossoverTest]) WITH PREFETCH)
-- |--Index Seek(OBJECT:([tempdb].[dbo].[crossoverTest].[testDate_ndx]), SEEK:([crossoverTest].[testDate] >= 'Jan 31 2007 12:00AM') ORDERED FORWARD)

-- Table 'crossoverTest'. Scan count 1, logical reads 813, physical reads 0, read-ahead reads 0.

go

select rid,
left (filler, 10) as Filler,
testDate
from crossoverTest
where testDate >= '1/30/7'

go

-- StmtText
--
-- |--Compute Scalar(DEFINE:([Expr1002]=substring(Convert([crossoverTest].[filler]), 1, 10)))
-- |--Clustered Index Scan(OBJECT:([tempdb].[dbo].[crossoverTest].[pk_crossoverTest]), WHERE:([crossoverTest].[testDate]>='Jan 30 2007 12:00AM'))

-- Table 'crossoverTest'. Scan count 1, logical reads 913, physical reads 0, read-ahead reads 0.


go

set showplan_text off
go
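If you ever need to pin the faster plan temporarily while you work out the real fix, one blunt option is a table hint that forces the nonclustered index; a sketch against the mock-up table above:

-- Force the nonclustered index even past the crossover point (illustrative only;
-- a better index or fresher statistics is usually the real fix).
select rid,
left (filler, 10) as Filler,
testDate
from crossoverTest with (index (testDate_ndx))
where testDate >= '1/30/7'
go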

|||

Thanks Dave,

Ultimately, a simple index was the issue. What perplexed me was just that the difference between the queries when it returned one more row was so dramatic. And to answer your question, I had just come upon the crossover issue through trial and error. I hadn't noticed the query plans were different until I got to the point where 1 row was the difference between success and failure.

Friday, March 23, 2012

Query from 2 XML data sources

Hi,
Suppose I have two XML documents. The first one contains a list of
authors and the second one contains a list of books. For example,
author.xml
--
<authors>
<author @.id="1">
<name>Tom</name>
<tel>001-909-1129</tel>
</author>
<author @.id="2">
<name>Chris</name>
<tel>001-871-1341</tel>
</author>
<author @.id="3">
<name>Jack</name>
<tel>001-452-8721</tel>
</author>
</authors>
book.xml
--
<catalog>
<book>
<title>SQL Server for Dummies</title>
<author @.id="1">Tom</author>
</book>
<book>
<title>XQuery for Dummies</title>
<author @.id="2">Chris</author>
</book>
<book>
<title>It's All About XML</title>
<author @.id="3">Jack</author>
</book>
</catalog>
Suppose I store "author.xml" as xml data type in the AUTHOR table and
"book.xml" as xml data type in the BOOK table. I would like to get the
information of authors who have written a book with title containing
"XQuery".
How can I write XQuery in SQL Server 2005?
Thanks

|||

First - see also the item (in this forum) titled "Cross Joining two
XML columns, or two (or more) XML variables" (it also got renamed a
few times in the duration).
--
Well, firstly I wouldn't store them as entire documents. This data is
essentially in tabular form, so I would have an AUTHORS table (id,
name, tel) etc.
If I had to use xml (to store semi-structured data), then I would
probably start with AUTHORS (xml) (one row per author - which you can
do by shredding the original xml quite easily [shown]) - then add some
computed, stored, indexed columns. This means that in the database you
have access to a read-only column that is the "@id" - this allows you
to join very efficiently between the tables, by using the pre-computed
column on the destination, not an xml query - i.e.
DROP TABLE AUTHORS
DROP FUNCTION GetAuthorId
GO
CREATE TABLE AUTHORS ([Content] xml NOT NULL)
GO
CREATE FUNCTION [dbo].GetAuthorId(@Content xml) RETURNS int
WITH SCHEMABINDING
BEGIN
DECLARE @val varchar(50)
SET @val = @Content.value('(/author/@id)[1]','varchar(50)')
RETURN CASE ISNUMERIC(@val) WHEN 1 THEN @val ELSE NULL END
END
GO
ALTER TABLE AUTHORS ADD [Id]
AS [dbo].GetAuthorId([Content]) PERSISTED
GO
CREATE NONCLUSTERED INDEX IDX_AuthorId
ON AUTHORS([Id])
GO
DECLARE @xml xml
SET @xml = '
<authors>
<author id="1">
<name>Tom</name>
<tel>001-909-1129</tel>
</author>
<author id="2">
<name>Chris</name>
<tel>001-871-1341</tel>
</author>
<author id="3">
<name>Jack</name>
<tel>001-452-8721</tel>
</author>
</authors>'
INSERT AUTHORS ([Content])
SELECT tmp.x.query('.')
FROM @xml.nodes('/authors/author') tmp(x)
SELECT * FROM AUTHORS
--
Now you can join to the stored and indexed AUTHORS.Id column.
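For example, with a hypothetical BOOKS table stored the same way (its [Content] xml column and a persisted BookAuthorId computed column are assumptions that mirror the AUTHORS setup above), the join might look like this sketch:

-- Sketch only: BOOKS and its BookAuthorId column are hypothetical, built like AUTHORS.Id above.
SELECT a.[Content].value('(/author/name/text())[1]', 'varchar(50)') AS AuthorName,
b.[Content].value('(/book/title/text())[1]', 'varchar(200)') AS Title
FROM AUTHORS AS a
JOIN BOOKS AS b
ON b.BookAuthorId = a.[Id]
WHERE b.[Content].exist('/book/title[contains(., "XQuery")]') = 1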
Marc

|||

I think the gist of the discussion is that you basically have to shred the 2
documents to relational format and join using SQL. If, however, all of the
information were stored in a single document you could simulate an inner
join using a FLWOR expression.
1. If you put two "$var in path_expression" expressions in the for clause
separated by a comma, XQuery generates the cross join of the two path
expressions.
2. If you add a where clause to limit the results to those where your id
attribute is equal in both path expressions you have an inner join.
3. If you "and" another predicate to the where clause limiting the results
to those that contain the word "XQuery", you get author #2, "Chris".
In the example below I've combined your documents, removed the invalid "@"
character from the attribute names, and entitised the apostrophe in your
data. The FLWOR expression that follows generates the results. Of course all
this is dependent on the two documents being combined into a single doc.
DECLARE @xml XML;
SELECT @xml = '<authors>
<author id="1">
<name>Tom</name>
<tel>001-909-1129</tel>
</author>
<author id="2">
<name>Chris</name>
<tel>001-871-1341</tel>
</author>
<author id="3">
<name>Jack</name>
<tel>001-452-8721</tel>
</author>
</authors>
<catalog>
<book>
<title>SQL Server for Dummies</title>
<author id="1">Tom</author>
</book>
<book>
<title>XQuery for Dummies</title>
<author id="2">Chris</author>
</book>
<book>
<title>It's All About XML</title>
<author id="3">Jack</author>
</book>
</catalog>';
SELECT @xml.query('for $a in /authors/author, $b in /catalog/book
where $a/@id = $b/author/@id
and $b/title[contains(., "XQuery")]
return $a');
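If the two documents stay in separate tables or columns, you can also shred both with the nodes() method and do the join in T-SQL. A minimal sketch using xml variables follows; against the AUTHOR and BOOK tables you would apply the same nodes()/value() calls to the xml columns via CROSS APPLY:

DECLARE @author xml, @book xml
SET @author = N'<authors>
<author id="1"><name>Tom</name><tel>001-909-1129</tel></author>
<author id="2"><name>Chris</name><tel>001-871-1341</tel></author>
</authors>'
SET @book = N'<catalog>
<book><title>XQuery for Dummies</title><author id="2">Chris</author></book>
</catalog>'
-- Shred each document into rows, then join relationally on the id attribute.
SELECT a.x.value('@id', 'int') AS AuthorId,
a.x.value('(name/text())[1]', 'varchar(50)') AS AuthorName,
a.x.value('(tel/text())[1]', 'varchar(50)') AS AuthorTel
FROM @author.nodes('/authors/author') AS a(x)
CROSS JOIN @book.nodes('/catalog/book') AS b(x)
WHERE b.x.value('(author/@id)[1]', 'int') = a.x.value('@id', 'int')
AND b.x.value('(title/text())[1]', 'varchar(200)') LIKE '%XQuery%'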

Wednesday, March 7, 2012

query contains concatenated inner results

I have a table item(id, item_name, item_added_date, item_price).
The second table is item_viewedby(id, item_id, userid).
The second table holds the users who are viewing a certain item.
So the data in the first table, item, looks like:

id  item_name   item_added_date  item_price
1   toothbrush  01/01/2003       100
2   toothpaste  02/02/2003       125

And the data in the second table, item_viewedby, looks like:

id  item_id  userid
1   1        jane
2   1        john
3   1        kim
4   2        jim
5   2        tim
I want to execute a query so that for each item I get a single row of
item_id, item_price, item_viewed_by
where item_viewed_by is a semicolon-separated list of the users who are
viewing a certain item.
So my result table would look like:

item_id  item_price  item_viewed_by
1        100         jane;john;kim
2        125         jim;tim
I want to put this query in a stored procedure and call it from
my ASP.NET code.
If this has to be done using cursors, then please help me with that, too.
Thanks,
Fred

|||

See the following example, which will give you a fair idea of how you can
convert rows to columns.
Ex:
create table tab(ID int,
cats varchar(50))
go
insert into tab values(1 ,'jane')
insert into tab values(1 ,'john')
insert into tab values(1 ,'kim')
insert into tab values(2 ,'nick')
insert into tab values(3 ,'jim')
insert into tab values(3 ,'tim')
go
if object_id('tempdb..#tmp') is not null
drop table #tmp
create table #tmp(id int, tmpval varchar(50))
go
declare @id int, @old_id int
declare @seq_num int
declare @cats varchar(50), @f_cats varchar(50)
select @id=0, @old_id=0, @seq_num=0, @cats='', @f_cats=''
declare c1 cursor for
select id, cats from tab order by id
open c1
fetch c1 into @id, @cats
while @@fetch_status = 0
begin
If @old_id <> @id and @old_id <> 0
begin
insert into #tmp values(@old_id, @f_cats)
select @seq_num=0, @f_cats=''
end
select @f_cats = @f_cats + case @f_cats when '' then '' else ',' end + @cats
select @old_id = @id
fetch c1 into @id, @cats
end
close c1
deallocate c1
insert into #tmp values(@old_id, @f_cats)
select * from #tmp
--
-Vishal
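As a set-based alternative on SQL Server 2005 and later, the correlated FOR XML PATH trick avoids the cursor entirely. A sketch against the item and item_viewedby tables from the question (semicolon-separated, as requested):

-- Build the semicolon-separated viewer list per item; STUFF strips the leading ';'.
SELECT i.id AS item_id,
i.item_price,
STUFF((SELECT ';' + v.userid
FROM item_viewedby AS v
WHERE v.item_id = i.id
ORDER BY v.id
FOR XML PATH('')), 1, 1, '') AS item_viewed_by
FROM item AS i
ORDER BY i.id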
"Fred" <fredg1232003@.yahoo.com> wrote in message
news:2022fcea.0307281714.3529f2f@.posting.google.com...
> I have a table item(id, item_name, item_added_date, item_price).
> The second table is item_viewedby(id, item_id, userid).
> The second table holds the users who are viewing a certain item.
> So data in first table item is like:
> id item_name item_added_date item_price
> 1 toothbrush 01/01/2003 100
> 2 toothpaste 02/02/2003 125
> And data in second table item_viewedby is like:
> id item_id userid
> 1 1 jane
> 2 1 john
> 3 1 kim
> 4 2 jim
> 5 2 tim
> I want to execute a query so that for each item I have a single row of
> item_id item_price item_viewed_by
> Where item_viewed_by is a semi_colon separated list of users who are
> viewing a certain item.
> So my result table is like
> item_id item_price item_viewed_by
> 1 100 jane;john;kim
> 2 125 jim;tim
> I am wanting to put this query in a stored procedure and call it from
> my ASP.NET code.
> If this is to be done using cursors then please help me with that, too
> Thanks,
|||

Yes, you can review it.
If you get any issues, just let me know.
Regards,
Hari Sharma, India
Wangkhar@yahoo.com (WangKhar) wrote in message news:<bb269444.0308040106.63f76743@posting.google.com>...
> hkvats - I took the liberty of reviewing your code - hope you don't mind.

|||

This type of concatenation over rows is
very easy with the RAC utility for S2k.
No coding required.
Check out:
http://www.rac4sql.net/onlinehelp.asp?topic=236
RAC v2.2 and QALite released.
www.rac4sql.net

Monday, February 20, 2012

Query and process performance with incremental update

Hello all,

We are working on a project with a large volume of data (around 1,000 rows per second).
We built a cube on this fact table.

This table will hold at most 90M rows.

We need the data in the cube to be "real time", that is, up to date.

We are doing this with proactive caching (incremental updates).

We also need very good query performance.

That's why the storage mode is set to MOLAP.

We still get poor performance from both the cube processing and the queries.

Any suggestions on how to solve these issues?

Thanks in advance,

Shy Engelberg - Certagon.

You might be seeing the results of metadata locking if you are processing the cube very frequently (see http://geekswithblogs.net/darrengosbell/archive/2007/04/24/SSAS-Processing-ForceCommitTimeout-and-quotthe-operation-has-been-cancelledquot.aspx). You probably need to profile the server to gather as much information as you can to figure out where the issues are.

Is it the source system - are you selecting only the new records?

Is the system CPU, IO or memory bound?

Are you using partitions to isolate the processing to a smaller subset of the data?

|||

The SSAS 2005 Performance Guide is a good reference for these kinds of issues (http://download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SSAS2005PerfGuide.doc).

I agree with Darren's idea of identifying whether the problem is occurring in retrieving source data records or in assembling the MOLAP structures. And partitioning may also be beneficial if you can isolate updates to a smaller partition.

You may also want to consider using HOLAP. HOLAP will give you excellent query performance for most queries with shorter processing times.

Bryan