Hello all,
We are working on a project with a large volume of data (around 1,000 new rows per second).
We built a cube on this fact table, which will hold at most 90M rows.
We need the data in the cube to be "real time", that is, up to date.
We are doing this with proactive caching (incremental updates).
We also need very good query performance, which is why the storage mode is set to MOLAP.
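Roughly, the partition's proactive caching settings look like the sketch below (the table name, processing query, and intervals are simplified placeholders here, not our exact values; the processing query is meant to return only the newly arrived rows):

<ProactiveCaching>
  <OnlineMode>Immediate</OnlineMode>
  <SilenceInterval>PT10S</SilenceInterval>
  <SilenceOverrideInterval>PT10M</SilenceOverrideInterval>
  <Latency>PT0S</Latency>
  <Source xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:type="ProactiveCachingIncrementalProcessingBinding">
    <RefreshInterval>PT1M</RefreshInterval>
    <IncrementalProcessingNotifications>
      <IncrementalProcessingNotification>
        <TableID>dbo_FactTable</TableID>
        <ProcessingQuery>SELECT * FROM dbo.FactTable WHERE LoadDate &gt;= DATEADD(minute, -1, GETDATE())</ProcessingQuery>
      </IncrementalProcessingNotification>
    </IncrementalProcessingNotifications>
  </Source>
</ProactiveCaching>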
Even so, we still get poor performance from both cube processing and the queries.
Any suggestions on how to solve these issues?
Thanks in advance,
Shy Engelberg - Certagon.
If you are processing the cube very frequently, you might be seeing the effects of metadata locking (see http://geekswithblogs.net/darrengosbell/archive/2007/04/24/SSAS-Processing-ForceCommitTimeout-and-quotthe-operation-has-been-cancelledquot.aspx). You probably need to profile the server to gather as much information as you can and figure out where the issues are.
Is it the source system? Are you selecting only the new records?
Is the system CPU, IO or memory bound?
Are you using partitions to isolate the processing to a smaller subset of the data?
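For example, if the new rows all land in a single small "current" partition, the frequent processing can be limited to a ProcessAdd of just that partition; something along these lines (the database, cube, measure group, and partition IDs below are placeholders for whatever your project actually uses):

<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>MyWarehouse</DatabaseID>
    <CubeID>MyCube</CubeID>
    <MeasureGroupID>MyFacts</MeasureGroupID>
    <PartitionID>MyFacts_Current</PartitionID>
  </Object>
  <Type>ProcessAdd</Type>
</Process>

A Profiler trace (the Progress Report Begin/End events in particular) will show whether that processing time is spent running the relational query or building the MOLAP data, which answers the questions above.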
The SSAS 2005 Performance Guide is a good reference for these kinds of issues (http://download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SSAS2005PerfGuide.doc).
I agree with Darren's idea of identifying whether the problem is occurring in retrieving the source data records or in assembling the MOLAP structures. Partitioning may also be beneficial if you can isolate updates to a smaller partition.
You may also want to consider HOLAP storage: it gives excellent performance for most queries (those that can be answered from the MOLAP aggregations) while keeping processing times shorter, since the leaf-level data stays in the relational source.
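As a rough illustration only (the object IDs, data source, and query are placeholders), a small HOLAP partition holding just the current day's rows could be defined like this, keeping the frequently reprocessed slice small:

<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ParentObject>
    <DatabaseID>MyWarehouse</DatabaseID>
    <CubeID>MyCube</CubeID>
    <MeasureGroupID>MyFacts</MeasureGroupID>
  </ParentObject>
  <ObjectDefinition>
    <Partition xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
      <ID>MyFacts_CurrentDay</ID>
      <Name>MyFacts Current Day</Name>
      <Source xsi:type="QueryBinding">
        <DataSourceID>MyDataSource</DataSourceID>
        <QueryDefinition>SELECT * FROM dbo.FactTable WHERE DateKey = 20070424</QueryDefinition>
      </Source>
      <StorageMode>Holap</StorageMode>
    </Partition>
  </ObjectDefinition>
</Create>

Older, stable partitions can stay MOLAP with full aggregations so most queries never touch the relational source.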
Bryan