
Steps To Check In BW Performance Tuning

Indices: With an increasing number of data records in the InfoCube, not only the load but also the query performance can degrade. This is attributed to the growing cost of maintaining the indexes. The indexes that are created on the fact table for each dimension allow the system to find and select the data easily (see the index sketch below).

Partitioning: By using partitioning you can split up the whole dataset of an InfoCube into several smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, and also when deleting data from the InfoCube (see the partitioning sketch below).

Aggregates: Aggregates make it possible to access InfoCube data quickly in reporting. They serve, in a similar way to database indexes, to improve performance (see the aggregate sketch below).

Compressing the InfoCube: InfoCube compression means aggregation of the data while ignoring the request IDs. After compression, the system need not perform aggregation over the request ID every time you execute a query (see the compression sketch below).

Based on these you may have doubts like: how do the above techniques compare and contrast? Are all of them meant to improve query performance? And what techniques do we follow to improve data load performance?

To clear those doubts: yes, the creation of indexes should be done after loading, because an index behaves just like a book index, it speeds up reads but must be maintained on every write. Aggregates improve query performance because, as you can observe at query execution time, the OLAP processor takes much time to calculate the output the first time, but the next time it is faster.

In what ways and in what combinations should they be implemented in a project? It depends on the client requirement. If the reports are running slow, or the loads are running slow, or for any such issue, we need to study the problem by maintaining statistical information, using transaction codes and tables like RSDDSTAT, ST22 and DB02, and then analyse the issue and follow the techniques required.
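To make the index idea concrete, here is a minimal Python sketch (plain Python with invented rows and column names, not SAP code or the actual database index structure). It shows why a secondary index lets a selection jump straight to matching rows, and also why every load has to pay for maintaining it:

    # Invented fact rows: (material, customer, amount)
    fact_table = [("M1", "C1", 100), ("M2", "C1", 50), ("M1", "C2", 70)]

    # A secondary index on material: value -> positions of matching rows.
    # Building (and, on every load, updating) this is the maintenance cost.
    index_on_material = {}
    for pos, (material, customer, amount) in enumerate(fact_table):
        index_on_material.setdefault(material, []).append(pos)

    # A selection by material jumps straight to the rows instead of
    # scanning the whole fact table.
    rows_for_m1 = [fact_table[pos] for pos in index_on_material["M1"]]
    print(rows_for_m1)  # [('M1', 'C1', 100), ('M1', 'C2', 70)]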
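Partitioning can be sketched in the same spirit (invented data and an invented calendar-month partitioning key, not database DDL): the rows live in physically separate buckets, so reporting on one month touches only one bucket, and deleting a month is just dropping a bucket:

    from collections import defaultdict

    # Invented fact rows: (calmonth, material, amount)
    rows = [("2024-01", "M1", 10), ("2024-01", "M2", 5), ("2024-02", "M1", 7)]

    # Partition by calendar month: one independent bucket per month.
    partitions = defaultdict(list)
    for calmonth, material, amount in rows:
        partitions[calmonth].append((material, amount))

    # Reporting on one month reads only that partition...
    january = partitions["2024-01"]

    # ...and deleting a month's data is dropping one bucket,
    # not scanning and deleting from the whole dataset.
    del partitions["2024-02"]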
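The aggregate idea, again as a hedged Python sketch (the fact rows and the choice of aggregating by material are invented for illustration): the totals are computed once, at load or rollup time, so a query no longer has to scan and sum the whole fact table on every execution:

    from collections import defaultdict

    # Invented fact rows: (request_id, material, customer, amount)
    fact_table = [
        (1, "M1", "C1", 100),
        (1, "M2", "C1", 50),
        (2, "M1", "C2", 70),
        (2, "M2", "C2", 30),
    ]

    # Build the "aggregate" once: totals per material.
    aggregate_by_material = defaultdict(int)
    for request_id, material, customer, amount in fact_table:
        aggregate_by_material[material] += amount

    # A query on material now reads the small precomputed table.
    print(aggregate_by_material["M1"])  # 170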
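And compression, sketched the same way (invented rows, not the actual fact table mechanics): rows that differ only in their request ID collapse into one, so queries never aggregate over the request ID again:

    from collections import defaultdict

    # Invented fact rows: (request_id, material, customer, amount)
    fact_table = [
        (1, "M1", "C1", 100),
        (2, "M1", "C1", 70),   # same dimensions, different request
        (2, "M2", "C2", 30),
    ]

    # "Compress": drop the request ID, sum rows with identical dimensions.
    compressed = defaultdict(int)
    for request_id, material, customer, amount in fact_table:
        compressed[(material, customer)] += amount

    print(dict(compressed))  # {('M1', 'C1'): 170, ('M2', 'C2'): 30}

Note that after this step the individual requests can no longer be told apart, which is why compressed data can no longer be deleted by request ID.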

Basically, the following are the points to be kept in mind to improve loading performance (conceptual sketches for some of them follow the list):

1. When you are extracting data from the source system, choose the PSA transfer method deliberately: using PSA and data target in parallel gives faster loading, while using only the PSA and updating the data target subsequently reduces the burden on the server.

2. Data packet size: when extracting data from the source system to BW, we use data packets. As per the SAP standard, we prefer to have about 50,000 records per data packet. For every data packet the system does a commit and save, so fewer data packets mean less overhead; but if you have 100,000 (1 lakh) records per data packet and there is an error in the last record, the entire packet fails (see the packet-size sketch below).

3. In a project we have millions of records to extract from different modules to BW. All loads will be running in the background approximately every one or two hours, handled by work processes. We need to make sure that the work processes are neither over-utilized nor under-utilized.

4. Drop the indexes of a cube before loading and rebuild them afterwards.

5. Distribute the workload among multiple server instances.

6. Prefer delta loads, as they load only newly added or modified records.

7. Deploy parallelism: multiple InfoPackages should be run simultaneously (see the parallel-load sketch below).

8. Update routines and transfer routines should be avoided unless necessary, and any routine should be optimized code.

9. Prefer to load master data first and then transaction data, because when you load master data the SIDs are generated, and these SIDs are then used by the transaction data (see the SID sketch below).
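To see the packet-size trade-off from point 2 in numbers, a small sketch (the record totals are invented):

    import math

    total_records = 1_000_000

    for packet_size in (10_000, 50_000, 100_000):
        packets = math.ceil(total_records / packet_size)
        # One commit-and-save per packet: bigger packets mean fewer
        # commits, but one bad record fails its whole packet.
        print(f"{packet_size:>7} records/packet -> {packets:>3} packets; "
              f"one bad record re-loads up to {packet_size} records")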
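Point 7 can be illustrated with Python's standard thread pool; load_infopackage here is an invented stand-in for a real InfoPackage load, not an SAP API:

    from concurrent.futures import ThreadPoolExecutor
    import time

    def load_infopackage(name):
        time.sleep(1)  # stand-in for one load waiting on extraction I/O
        return f"{name} loaded"

    packages = ["IP_SALES", "IP_BILLING", "IP_DELIVERY"]

    # Run the three loads simultaneously instead of one after another:
    # roughly 1 second in total here instead of roughly 3.
    with ThreadPoolExecutor(max_workers=3) as pool:
        for result in pool.map(load_infopackage, packages):
            print(result)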
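And a minimal sketch of the SID argument behind point 9 (invented data and an invented numbering scheme, not the real BW SID tables): when the master data load has already generated the SIDs, the transaction data load only has to look them up:

    # Master data load: a surrogate ID (SID) is generated per key, once.
    sid_table = {}
    for key in ["M1", "M2", "M3"]:
        sid_table[key] = len(sid_table) + 1

    # Transaction data load: just look the SIDs up. If the master data
    # were missing, every unknown key would force SID creation mid-load.
    transactions = [("M1", 100), ("M3", 40)]
    fact_rows = [(sid_table[material], amount) for material, amount in transactions]
    print(fact_rows)  # [(1, 100), (3, 40)]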

This is all about my clear picture of Performance Issues!