
200 BW Questions and Answers for INTERVIEWS

1) Please describe your experience with BEx (Business Explorer)
A) Rate your level of experience with BEx and the rationale for your self-rating
B) How many queries have you developed? : 5 to 6 Queries
C) How many reports have you written?
D) How many workbooks have you developed?
E) Experience with jump targets (report-to-report interface into OLTP; transaction RSBBS)
F) Describe experience with BW-compatible ETL tools (e.g. Ascential)
2) Describe your experience with 3rd party report tools (Crystal Decisions, Business
Objects a plus)
3) Describe your experience with the design and implementation of standard & custom
InfoCubes.
1. How many InfoCubes have you implemented from start to end by yourself (not with a
team)?
2. Of these cubes, how many characteristics (including attributes) did the largest one
have?
3. How much customization was done on the InfoCubes you have implemented?
4) Describe your experience with requirements definition/gathering.
5) What experience have you had creating Functional and Technical specifications?
6) Describe any testing experience you have:
7) Describe your experience with BW extractors
1. How many standard BW extractors have you implemented?
2. How many custom BW extractors have you implemented?
8) Describe how you have used Excel as a complement to BEx
A) Describe your level of expertise and the rationale for your self-rating (experience with
macros, pivot tables and formatting)

9) Describe experience with ABAP


10) Describe any hands on experience with ASAP Methodology.
11) Identify SAP functional areas (SEM, CRM, etc.) you have experience in. Describe
that experience.
12) What is partitioning and what are the benefits of partitioning in an InfoCube?
A) Partitioning is the method of dividing a table (either column wise or row wise) based
on the fields available which would enable a quick reference for the intended values of
the fields in the table. By partitioning an infocube, the reporting performance is
enhanced because it is easier to search in smaller tables. Also table maintenance
becomes easier.
13) What does Rollup do?
A) Rollup writes newly loaded requests into the existing aggregates of an infocube, so that the aggregates stay up to date with the cube.
14) What are the inputs for an infoset?
A) The inputs for an infoset are ODS objects and InfoObjects (with master data or text).
15) What internally happens when BW objects like Info Object, Info Cube or ODS are
created and activated?
A) When an InfoObject, InfoCube or ODS object is created, BW maintains a saved
version of that object but does not make it available for use. Once the object is
activated, BW creates an active version that is available for use.
16) What is the maximum number of key fields that you can have in an ODS object?
A) 16.
17) What is the specific advantage of LO extraction over LIS extraction?
A) The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome. In LO, only one delta queue is used for delta management.
18) What is the importance of 0REQUID?
A) It is the InfoObject for the request ID. 0REQUID enables BW to distinguish between the data records of different load requests.
19) Can you add programs in the scheduler?
A) Yes. Through event handling.
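For illustration, a minimal sketch (the report name and the event Z_BW_LOAD_EVENT are hypothetical): a custom program raises a background event with the standard function module BP_EVENT_RAISE, and an InfoPackage whose start condition in the scheduler is "after event" is then released.

REPORT z_trigger_bw_load.

* Raise a background event; an InfoPackage scheduled with the start
* condition "after event" and this event ID is then triggered.
* Z_BW_LOAD_EVENT is assumed to be defined in SM62.
CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid = 'Z_BW_LOAD_EVENT'
  EXCEPTIONS
    OTHERS  = 1.

IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised.'.
ENDIF.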

20) What is the importance of the table ROIDOCPRMS?


A) It holds the IDoc control parameters for a source system. This table contains the details of the data transfer, like the source system of the data, the data packet size, the maximum number of lines in a data packet, etc. The data packet size can be changed through the control parameters option in SBIW, i.e., the contents of this table can be changed.
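As a minimal sketch (the logical system name is a placeholder; MAXSIZE and MAXLINES are the commonly used fields of this table), the control parameters can be read like this:

* Read the data transfer control parameters for one source system.
* 'SRCCLNT100' is a placeholder logical system name.
DATA ls_prms TYPE roidocprms.

SELECT SINGLE * FROM roidocprms
  INTO ls_prms
  WHERE slogsys = 'SRCCLNT100'.

IF sy-subrc = 0.
  WRITE: / 'Packet size (KB):', ls_prms-maxsize,
         / 'Max lines per packet:', ls_prms-maxlines.
ENDIF.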
21) What is the importance of 'start routine' in update rules?
A) A start routine is a user exit that can be executed before the update rules start, in order to allow more complex computations for a key figure or a characteristic. The start routine has no return value. Its purpose is to execute preliminary calculations and to store them in a global data structure. You can access this structure or table in the other routines.
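A minimal sketch of this pattern in a BW 3.x-style start routine (it assumes the communication structure carries a MATERIAL field; /BI0/PMATERIAL is the standard master data table of 0MATERIAL):

* Global declaration part of the update rules:
DATA: gt_material TYPE STANDARD TABLE OF /bi0/pmaterial.

* Start routine body: buffer master data once per data package, so the
* individual key figure/characteristic routines can read from memory
* instead of selecting from the database record by record.
IF NOT data_package[] IS INITIAL.
  SELECT * FROM /bi0/pmaterial
    INTO TABLE gt_material
    FOR ALL ENTRIES IN data_package
    WHERE material = data_package-material
      AND objvers  = 'A'.
ENDIF.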
22) When is IDOC data transfer used?
A) IDOCs are used for communication between logical systems like SAP R/3, R/2 and
non-SAP systems using ALE and for communication between an SAP R/3 system and a
non-SAP system. In BW, an IDOC is a data container for data exchange between SAP
systems or between SAP systems and external systems based on an EDI interface.
The IDoc transfer method limits a data record to 1000 bytes, so IDocs are not used when loading data into the PSA, where the data is more detailed; IDoc transfer is suitable only when the records are smaller than 1000 bytes.
23) What is partitioning characteristic in CO-PA used for?
A) For easier parallel search and load of data.
24) What is the advantage of BW reporting on CO-PA data compared with directly
running the queries on CO-PA?
A) BW has a better performance advantage over reporting in R/3. For a huge amount of
data, the R/3 reporting tool is at a serious disadvantage because R/3 is modeled as an
OLTP system and is good for transaction processing rather than analytical processing.
25) What is the function of BW statistics cube?
A) BW statistics cube contains the data related to the reporting performance and the
data loads of all the InfoCubes in the BW system.
26) When an ODS is in 'overwrite' mode, does uploading the same data again and again
create new entries in the change log each time data is uploaded?
A) No.
27) What is the function of 'selective deletion' tab in the manage->contents of an
infocube?

A) It allows us to select a particular value of a particular field and delete its contents.
28) When we collapse an infocube, is the consolidated data stored in the same infocube or in a new infocube?
A) Data is stored in the same cube: collapsing (compression) moves the requests from the F fact table into the E fact table.
29) What is the effect of aggregation on the performance? Are there any negative effects
on the performance?
A) Aggregation improves reporting performance. The downside is that aggregates have to be rolled up after each data load (and adjusted after master data changes), which adds to the load and maintenance effort.
30) What happens when you load transaction data without loading master data?
A) The transaction data gets loaded and the master data fields remain blank.
31) When given a choice between a single infocube and multiple InfoCubes with a
multiprovider, what factors does one need to consider before making a decision?
A) One has to check whether the InfoCubes are also used individually. If they are, it is better to go for a multiprovider over several cubes, since a query on an individual cube runs faster than a query on one big cube holding all the data.
32) How many hierarchy levels can be created for a characteristic info object?
A) Maximum of 98 levels.
33) What is open hub service?
A) The open hub service enables you to distribute data from an SAP BW system into external data marts, analytical applications, and other applications. With it, you can ensure controlled distribution across several systems. The central object for the export of data is the InfoSpoke; with it you define the object from which the data comes and the target into which it is transferred. Through the open hub service, SAP BW becomes a hub of an enterprise data warehouse, and central monitoring of the distribution status in the BW system keeps the data distribution transparent.
34) What is the function of 'reconstruction' tab in an infocube?
A) It reconstructs the deleted requests from the infocube. If a request has been deleted
and later someone wants the data records of that request to be added to the infocube,
one can use the reconstruction tab to add those records. It goes to the PSA and brings
the data to the infocube.
35) What are secondary indexes with respect to InfoCubes?
A) Indexes created in addition to the primary index of the infocube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.
36) What is DB connect and where is it used?
A) DB Connect is used to connect an external database directly to BW, so that tables and views of that database can serve as sources of data for BW.
37) Can we extract hierarchies from R/3 for CO-PA?
A) No, we cannot; there are no hierarchies in CO-PA.
38) Explain the field used for partitioning in CO-PA.
A) The CO-PA partitioning field (e.g. company code) is used to decrease the package size.
39) What is the V3 update method?
A) It is a program in the R/3 source system that schedules batch jobs to move the data collected in the extract structures to the DataSource (delta queue) collectively.
40) Differences between serialized and non-serialized V3 updates
41) What is the common method of finding the tables used in any R/3 extraction?
A) By using the transaction LISTSCHEMA we can navigate the tables.
42) What are the differences between a table view and an InfoSet query?
A) An InfoSet query is a query using flat tables (joins of ODS objects and InfoObjects), whereas a table view is a view on database tables defined in the ABAP Dictionary.
43) How do you load data from one InfoCube to another InfoCube?
A) Through data marts, data can be loaded from one InfoCube to another InfoCube.
44) What is the significance of setup tables in LO extractions?
A) Setup tables hold the historical data for LO extraction: init and full loads read from them (restricted by the selection criteria of the setup run) instead of hitting the application tables.
45) Difference between extract structure and DataSource
A) The DataSource defines the data to be extracted from the source system, whereas the extract structure is the record layout in which the DataSource delivers the data; on the replicated DataSource in BW we then define transfer rules.
B) The extract structure is a record layout of InfoObjects.
C) The extract structure is created on the source system and replicated to the SAP BW system.
46) What happens internally when delta is initialized?
47) What is the referential integrity mechanism?
A) Referential integrity is the property that guarantees that values in one column also exist in the column they reference. This property is enforced through integrity constraints.
48) What is activation of the extract structure in LO?
49) What is the difference between an Info IDoc and a data IDoc?
50) What is delta management in LO?
A) It is the mechanism underlying the delta update methods in LO, based on recording document changes for transfer to BW.
51) What is the entity relationship model in data modeling?
A) An ERD (Entity Relationship Diagram) is a model that can be used to generate a physical database.
B) It is a high-level data model.
C) It is a schematic that shows all the entities within the scope of integration and the direct relationships between the entities.
52) What is the difference between direct delta and queued delta updates in LO?
53) What is a non-cumulative infocube?
54) What kinds of tools are available to monitor the overall query performance?
55) How can we have a delta update for a generic data source?
56) What methods are available to debug load failures?
57) What is the data mining concept?
A) The process of finding hidden patterns and relationships in the data.
B) Typical data analysis requirements are fulfilled by data warehouses, where business users have an idea of what information they want to see.
C) Some opportunities embody data discovery requirements, where the business user wants to correlate sets of data to determine anomalies or patterns in the data.
58) What is scoring?
59) What is the usage of geo-coordinates?
A) The georelevant data can be displayed and evaluated on a map with the help of the
BEx Map.
60) What are the different query areas related to InfoSets?
A) Jump queries and ODS query areas are related to InfoSets.
61) How does time dependency work for BW objects?
A) Time-dependent attributes have values that are valid for a specific range of dates (i.e. a validity period).
62) What is I_ISOURCE?
A) Name of the InfoSource

63) What is I_T_FIELDS?


A) List of the transfer structure fields. Only these fields are actually filled in the data
table and can be sensibly addressed in the program.
64) What is C_T_DATA?
A) Table with the data received from the API in the format of source structure entered in
table ROIS (field ROIS-STRUCTURE).
65) What is I_UPDMODE?
A) Transfer mode as requested in the Scheduler of the Business Information Warehouse.
Not normally required.
66) What is I_T_SELECT?
A) Table with the selection criteria stored in the Scheduler of the SAP-Business
Information Warehouse. This is not normally required.
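The parameters in questions 62-66 form the interface of the extraction user exits (e.g. EXIT_SAPLRSAP_001 for transaction data, coded in include ZXRSAU01). A hedged sketch of the typical pattern; the DataSource name, the extract structure MC02M_0ITM, the appended field ZZREGION and the mapping table ZREGION_MAP are illustrative assumptions, and depending on the plug-in release the DataSource name arrives in I_ISOURCE or I_DATASOURCE:

* Include ZXRSAU01 (user exit EXIT_SAPLRSAP_001, transaction data).
DATA: l_s_itm TYPE mc02m_0itm.         " assumed extract structure

CASE i_isource.
  WHEN '2LIS_02_ITM'.                  " purchasing item, as an example
    LOOP AT c_t_data INTO l_s_itm.
*     Fill the appended field ZZREGION from a custom mapping table.
      SELECT SINGLE region FROM zregion_map
        INTO l_s_itm-zzregion
        WHERE werks = l_s_itm-werks.
      MODIFY c_t_data FROM l_s_itm INDEX sy-tabix.
    ENDLOOP.
ENDCASE.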
67) What is Serialized V3 Update?
A) This is the normal update method: document data is collected in the order it was created and transferred into BW as a batch job. The transfer sequence is, however, not guaranteed to be the same as the order in which the data was created in all scenarios.
68) What is Direct Delta?
A) In this method, extraction data is transferred directly from document postings into
the BW delta queue. The transfer sequence is the same as the order in which the data
was created.
69) What is Queued Delta?
A) In this method, extraction data from document postings is collected in an extraction
queue, from which a periodic collective run is used to transfer the data into the BW
delta queue. The transfer sequence is the same as the order in which the data was
created.
70) What is Unserialized V3 Update?
A) This method is almost identical to the serialized V3 update. The only difference is that the order of the document data in the BW delta queue does not have to be the same as the order in which it was posted. We only recommend this method when the order in which the data is transferred is not important, which depends on the design of the data targets in BW.
71) What are the different Update Modes?
A) Serialized V3 Update
B) Direct Delta
C) Queued Delta
D) Unserialized V3 Update
72) What are the different ways of data transfer?
A) Complete update: all the data from the information structure is transferred according to the selection criteria defined in the scheduler in SAP BW.

B) Delta Update: Only the data that has been changed or is new since the last update is
transferred. To use this option, you must activate the delta update.
73) What is the major use of an ODS object?
A) An ODS is mainly used as a staging area.
74) What is the benefit of using BW reporting over SAP Reporting?
A) Performance
B) Data Analysis
C) Better front end reporting.
D) Ability to pull data from SAP and non-SAP sources.
75) What are the differences between the star and extended star schema?
A) Star schema: only the characteristics of the dimension tables can be used to access facts. No structured drilldowns can be created. Support for many languages is difficult.
B) Extended star schema: master data tables and their associated fields (attributes), external hierarchy tables for structured access to data, and text tables with extensive multilingual descriptions.
76) What are the new features of SAP BW 3.0B?
77) What are the new features of the R/3 plug-in PI 2002.1?
78) What are the major errors in BW and R3 pertaining to BW?
A) Errors in loading data (ODS loading, cube loading, delta loading, etc.)
B) Errors in activating BW and other objects.
C) Issues in delta loading.
79) When are tables created in BW?
A) When the objects are activated, the tables are created. Their location depends on the Basis installation.
80) What are a start routine and a return table, and how do they work together?
A) A start routine is used in update rules to preprocess the whole data package, and a return table is used in an update routine to return several records (instead of a single value) when the routine executes.
81) What is the difference between start routine and update routine, when, how and
why are they called?
A) A start routine can access the whole data package of an InfoPackage load; update routines can't, since they are called per record.
82) What are the different non-R/3 systems that BW supports?
83) In a general project, how many InfoCubes, InfoObjects, InfoSources and MultiProviders can you expect?
A) It depends on the size of the project and, in turn, its business goals; it differs from project to project.

84) What does an M table signify?
A) A master data table.
85) What does an F table signify?
A) A fact table.
86) What is data warehousing?
A) Data warehousing is a concept in which data is stored centrally and analysis is performed on it.
87) What is a process chain and how did you use it?
A) Process chains are a tool available in BW for automating the upload of master data and transaction data while taking care of the dependencies between processes.
B) In one of our scenarios we wanted to upload the wholesale price InfoObject, which holds the wholesale price for all materials, and then load the transaction data. While loading the transaction data, a lookup in the update rules on this InfoObject's master data table populated the wholesale price. This dependency of first uploading the master data and then the transaction data was handled through a process chain.
88) What are RemoteCubes and how did you access and use them in your project?
A) A RemoteCube is an InfoCube whose transaction data is not managed in the
Business Information Warehouse but externally. Only the structure of the RemoteCube
is defined in BW. The data is read for reporting using a BAPI from another system.
B) Using a RemoteCube, you can carry out reporting using data in external systems
without having to physically store transaction data in BW. You can, for example, include
an external system from market data providers using a RemoteCube.
89) You must have worked on enhancements; which user exit did you work on? Can you explain?
A) Extended the DataSources 0MATERIAL_ATTR, 0PLANT_ATTR and 0MAT_PLANT_ATTR for master data loads from R/3 to BW. Edited user exit EXIT_SAPLRSAP_002 to populate master data for the extended fields, and EXIT_SAPLRSAP_001 for transaction data extraction from R/3 to BW.
90) What is the t-code for generic extractor?
A) RSO2
91) What is an InfoSet query?
A) An InfoSet is a special kind of InfoProvider. It is used for reporting by joining ODS objects and InfoObjects. InfoSets have been used in the Business Information Warehouse for InfoObjects (master data), ODS objects, and joins of these objects. The InfoSet query can be used to carry out tabular (flat) reporting on these InfoSets.
92) What is the purpose of aggregates?
A) Aggregates are like indexes on database tables. They hold rolled-up data for a few characteristics on which reports run frequently, and they are created to improve reporting performance. If a report is used very extensively and its performance is slow, we can create an aggregate on the characteristics used in the report, so that when the report runs the OLAP processor selects data from the aggregate instead of the cube.
93) How did you do data modeling in your project? Explain.
A) We collected requirements from the users, created an HLD (high-level design) document, and analyzed it to find the sources for the data. Then the data models were drawn up, indicating data flows and lookups. While designing the data model, consideration was given to reusing existing objects (like ODS objects and cubes), not storing redundant data, the volume of data, and batch dependencies.
94) As you said you have worked on cubes and ODS objects: which one is better suited for reporting? Explain the drawbacks and benefits of each one.
A) Cubes are best suited for reporting, and queries run faster on them. In an ODS we can have only simple reports; if we query on non-key fields (data fields) of an ODS, the report runs slower. On the other hand, in an ODS we can overwrite non-key fields, whereas we cannot overwrite in a cube; this is one of the disadvantages of a cube.
95) What are the different cubes you worked with in FI?
A) Please look at Business content cubes and BW documentation on them to answer
this question.
96) What is a delta upload? What is the use of a delta upload? Is only changed or added data extracted, or is the full data set extracted?
A) When transactional data is pulled from the R/3 system, if we pull only the changed or newly added records instead of pulling all the data daily (a full load), the load on the system is much smaller. So wherever possible we go for a delta load rather than a full load.
97) What are hierarchies? Explain how you used them in your project.
A) Hierarchies organize data in a structured way. For example, a BOM (bill of material) can be configured as a hierarchy.
98) What is t-code for CO-PA?
A) KEB0
99) What is a SID? What is the impact of using SIDs?
A) In BW, information is stored as SIDs. SIDs are auto-generated numbers assigned to each characteristic value when it is uploaded. A search on numeric keys is always faster than on alphanumeric values, hence SIDs are assigned to characteristic values.
100) What is table partitioning? What are return tables?
A) If we have 0CALMONTH or 0FISCPER as a time characteristic, we can partition the fact table physically. Table partitioning has to be supported by the database: Oracle, Informix and IBM DB2/390 support table partitioning, whereas SAP DB, Microsoft SQL Server and IBM DB2/400 do not. Table partitioning helps reports run faster, as data is read only from the relevant partitions.
B) In an update rule routine, if we want to return multiple records instead of a single value, we can use a return table.
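A hedged sketch of the body of an update routine with the return table option (BW 3.x style; COMM_STRUCTURE is the incoming record and RESULT_TABLE has the structure of the data target, with header line; the fields QUANTITY and /BIC/ZSIZE are illustrative):

* One incoming record is returned as two target records.
CLEAR result_table.
MOVE-CORRESPONDING comm_structure TO result_table.

result_table-/bic/zsize = 'S'.
result_table-quantity   = comm_structure-quantity / 2.
APPEND result_table.

result_table-/bic/zsize = 'M'.
result_table-quantity   = comm_structure-quantity / 2.
APPEND result_table.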


101) What is the t-code for Query Monitor?
A) RSRT
102) Apart from R/3, which legacy databases did you use for extraction?
A) We had a legacy system called CAM. The CAM system had open order information, which was loaded in full every day to the OM schedule line ODS. The CAM system was connected to R/3 through DB Connect.
103) What are the three ODS object tables? Explain.
A) An ODS object has three tables, called New, Active and Change Log. New data coming into the ODS is first stored in the New table. When the request is activated, the data is written to the Active table and the changes are recorded in the Change Log.
104) Can you explain start routines and how you used them in your project? Give an example.
A) A start routine is used for mass processing of records: all the records of the data package are available in it, so we can process them together. In one scenario we wanted to apply size percentages to forecast data. For example, if material M1 is forecast at, say, 100 units in May, then after applying the size split (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted four records in place of the single record coming in from the InfoPackage. This was achieved in a start routine.
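A hedged sketch of the body of such a start routine (BW 3.x style; DATA_PACKAGE is the data package of the update rules, and the fields /BIC/ZSIZE and QUANTITY are illustrative):

* Expand each forecast record into four size records (S/M/L/XL).
DATA: lt_out LIKE data_package OCCURS 0,
      ls_in  LIKE data_package,
      ls_out LIKE data_package.

LOOP AT data_package INTO ls_in.
  ls_out = ls_in.

  ls_out-/bic/zsize = 'S'.
  ls_out-quantity   = ls_in-quantity * '0.20'.
  APPEND ls_out TO lt_out.

  ls_out-/bic/zsize = 'M'.
  ls_out-quantity   = ls_in-quantity * '0.40'.
  APPEND ls_out TO lt_out.

  ls_out-/bic/zsize = 'L'.
  ls_out-quantity   = ls_in-quantity * '0.20'.
  APPEND ls_out TO lt_out.

  ls_out-/bic/zsize = 'XL'.
  ls_out-quantity   = ls_in-quantity * '0.20'.
  APPEND ls_out TO lt_out.
ENDLOOP.

data_package[] = lt_out[].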
105) In the update rules of an infocube we can specify separate update rules for the characteristics of each of the key figures. In which situations is this used?
A) To be discussed(TBD).
106) Other than BW, which ETL tools are used with SAP R/3 in the industry?
A) Informatica, ACTA, Cognos and Business Objects are among the other tools used.
107) Does any other ERP software use BW for data warehousing?
A) No.
108) What is the importance of hierarchies?
A) One can display the elements of characteristics in hierarchy form and evaluate query
data for the individual hierarchy levels in the Business Explorer (in Web applications or
in the BEx Analyzer).
109) Where is the 0RECORDMODE InfoObject used?
A) It is used in delta management. An ODS object uses the 0RECORDMODE InfoObject to interpret delta records; typical values are ' ' (after image), 'X' (before image), 'D' (delete) and 'R' (reverse).
110) What is operating concern in CO-PA?
A) An organizational structure that combines controlling areas together in the same way
as controlling areas group companies together.

111) Are all the characteristics present in an ODS key fields?
A) No. An ODS object contains key fields (for example, document number/item) and data fields, which can also be character fields (for example, order status, customer).
112) What is the use of BAPIs and ALE?
A) BAPIs and ALE are sets of programs that extract data from DataSources. BW connects to SAP systems (R/3 or BW) and flat files via ALE, and to non-SAP systems via BAPIs.
113) What is the importance of compounding InfoObjects?
A) A compound attribute differentiates a characteristic to make it uniquely identifiable. For example, similar products may be manufactured in several plants (plant A: soap, paste, lotion; plant B: soap, paste, lotion). In this case the values for plant A and plant B must be made unique, so the characteristics are compounded.
114) Are there any limitations for BEx analyzer?
A) TBD
115) How does BEx Analyzer connect to BW?
A) BEx Analyzer is connected to the OLAP processor; OLE DB connectivity links BEx Analyzer with BW.
116) What is field partitioning in CO-PA?
A) It internally allocates space in the database. If a needed table resides in one or a few partitions, then only these partitions will be selected and examined by the SQL statement, thereby significantly reducing the I/O volume.
117) Where can one check the log for the warning messages that appear when activating transfer rules?
A) If transfer rules are not defined for some InfoObjects, the traffic lights will not be green.
118) What are the advantages of reporting on an infocube compared with reporting on an ODS?
A) Query performance is better with an infocube. An infocube has a multidimensional model, whereas an ODS is a flat table. Aggregates and MultiProviders can be built on an infocube, which further enhances query performance; they cannot be built on an ODS.
119) How does a navigational attribute differ from other attributes in terms of linking it
with the infocube?
A) TBD
120) How does the delta update mechanism work in ODS?
A) An ODS has three database tables: the New table, the Active table and the Change Log table. Newly loaded data is first stored in the New table. On activation, it is compared against the existing data, the (delta) data is transferred into the Active table, and the changes are noted in the Change Log. The Change Log then supplies the delta data to the connected targets.
121) What is time-dependent master data?
A) Time-dependent master data keeps changing according to time. For example, assume a scenario where sales person A works in the East zone until Jan 30th 2004 and then moves to the North zone from Jan 31st 2004. The master data for sales person A must therefore record a different zone depending on the validity period.
122) Can we load transaction data into an infocube without loading the master data first?
A) Yes.
123) What is the difference between saving and activating?
A) In BW, saving stores the defined structure so it can be retrieved whenever required.
B) Activating saves it and, in addition, generates the required tables and structures.
124) Why do we use only one client in BW?
125) What is time-dependent master data?
A) Time-dependent master data keeps changing according to time. For example, assume a scenario where sales person A works in the East zone until Jan 30th 2004 and then moves to the North zone from Jan 31st 2004. The master data for sales person A must therefore record a different zone depending on the validity period.
126) What are the advantages of aggregates?
A) Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates
serve, in a similar way to database indexes, to improve performance.
127) In which situations can we not use aggregates?
A) If the data provider is an ODS; aggregates can only be built on InfoCubes.
128) Aggregates are recommended in the following cases:
A) The execution and navigation of query data leads to delays with a group of queries.
B) You want to speed up the execution and navigation of a specific query.
C) You often use attributes in queries.
D) You want to speed up reporting with characteristic hierarchies by aggregating
specific hierarchy levels.
129) What does delta initialization do?
A) It initializes the delta update mechanism for that DataSource: after the initialization run, BW expects delta data from the source.
130) What is the difference between delta and pseudo delta?
A) Some data targets and application modules have a native delta update feature; for example, ODS objects and CO-PA are delta capable, so after the first full accumulation of data BW can expect the data in deltas for these targets. When a data target does not have this feature, it can be made delta capable (a pseudo delta) by staging the data through an ODS object used as a data target.
131) What is Third Normal Form and how does it compare with the star schema?
A) Third normal form is normalized form of storing data in a relational database. It
eliminates functional dependencies on non-key fields by putting them in a separate
table. At this stage, all non-key fields are dependent on the key, the whole key and
nothing but the key.
B) Star schema is a denormalized form of storing data, which paves the path for storing
data in a multi-dimensional model.
132) What is the ASAP methodology?
A) ASAP is a standard methodology for efficiently implementing and continually
optimizing the SAP software. ASAP supports the implementation of the R/3 System and
of mySAP.com Components, and can also be used for upgrade projects. It provides a
wide range of tools that helps in all stages of implementation project - from project
planning to the continual improvement of the SAP System. The two key tools in ASAP
are: The Implementation Assistant, which contains the ASAP Roadmap, and provides a
structured framework for your implementation, optimization or upgrade project. The
Question & Answer database (Q&Adb), which allows you to set your project scope and
generate your Business Blueprint using the SAP Reference Structure as a basis.
133) Significance of infoset.
A) An InfoSet describes data sources, defined as a rule as joins of ODS objects or InfoObjects. An InfoSet is a semantic view of data sources and is not a physical data target in itself. One can define reports in the BEx Query Designer using activated InfoSets.
134) Differences between multicube and remote cube.
A) A Multicube is a type of Info Provider that combines data from a number of Info
Providers and makes them available as a whole to reporting.
B) A Remote Cube is an InfoCube whose transaction data is not managed in the
Business Information Warehouse but externally. Only the structure of the Remote Cube
is defined in BW. The data is read for reporting using a BAPI from another system.
135) Life period of data in Change Log of an ODS.
A) The data of Change Log can be scheduled to be deleted periodically. Usually the Data
is removed after it has been updated into the data targets.
136) Drilldown method of Infocube to ODS.
A) A MultiProvider can be designed to include both the ODS and the infocube in question. This gives a chance to drill down from the infocube to the ODS.
137) What are inbound ODS and consistent ODS?
A) In an Inbound ODS object, the data is saved in the same form as they are when
delivered from the source system. This ODS type can be used to report the original data
as it comes from the source system.
B) In a Consistent ODS object, data is stored in granular form and consolidated. This consolidated data on a document level creates the basis for further processing in BW.
138) Life period of data in PSA.
A) Data in PSA is deleted when one feels that there is no need for any use of it in future.
There is a trade off between wastage of space and usage as a back up for data in the
source system.
139) How do you load data from one infocube to another?
A) An export DataSource is created from the infocube that is supposed to feed the data. This can be done by right-clicking on the infocube and selecting 'Export data source'. A suitable InfoSource can then be created for this DataSource, and the intended target infocube can be fed.
140) What is activation of objects?
A) Activation of objects enables them to be executed, in other words used elsewhere for
different purposes. Unless an object is activated it cannot be used.
141) Are key figures navigable?
A) No, key figures are not navigable.
142) What is transactional ODS?
A) A transactional ODS object differs from a standard ODS object in the way it prepares
data. In a standard ODS object, data is stored in different versions (active, delta,
modified), whereas a transactional ODS object contains the data in a single version.
Therefore, data is stored in precisely the same form in which it was written to the
transactional ODS object by the application.
143) Are SIDs static or dynamic?
A) SIDs are static.
144) Is data in Infocube editable?
A) No.
145) What are data-marts?
A) A data mart is also known as a local data warehouse. It is an implementation of a
data warehouse with a restricted scope of content, with support for analytical
processing and serving a single department, part of an organization, or a particular
data analysis problem domain.
146) Which one is more denormalized: the ODS or the infocube?
A) The infocube is more normalized than the ODS: a cube splits data into fact and dimension tables, whereas an ODS object keeps all fields in one flat table.
147) Is CO-PA delta capable?
A) Yes, CO-PA is delta capable.
148) What is the replication of a DataSource?
A) Replication of a DataSource enables the extract structure of the source system to be replicated in the target BW system.

149) Are any quality checks available for inefficient cube designs?
A) Huge dimension tables (relative to the fact table) make a cube inefficient.
150) Why is the star schema not implemented for the ODS as well?
A) Because an ODS is meant to store detailed documents for quick perusal and to help make short-term decisions.
151) Why do we need separate update rules for characteristics on each key figure?
A) It depends on the business requirement.
152) Use of Hierarchies.
A) Efficient reporting is one of the targets of using hierarchies. Easy drilldown paths
can be built using hierarchies.
153) What is "Referential Integrity"?
A) A feature provided by relational database management systems (RDBMSs) that prevents users or applications from entering inconsistent data. For example, suppose
Table B has a foreign key that points to a field in Table A. Referential integrity would
prevent you from adding a record to Table B that cannot be linked to Table A. In
addition, the referential integrity rules might also specify that whenever you delete a
record from Table A, any records in Table B that are linked to the deleted record will
also be deleted. This is called cascading delete. Finally, the referential integrity rules
could specify that whenever you modify the value of a linked field in Table A, all records
in Table B that are linked to it will also be modified accordingly. This is called cascading
update.
154) What is a Transactional Cube and when is it preferred?
A) Transactional InfoCubes differ from Basic InfoCubes in their ability to support
parallel write accesses. Basic InfoCubes are technically optimized for read accesses to
the detriment of write accesses. Transactional cubes are designed to meet the demands
of SEM, where multiple users write simultaneously into a cube and data is read as soon
as possible.
155) When is the data in the Change Log table of an ODS deleted?
A) Deleting data from the change log for an ODS object is recommended if several
requests, which are no longer required for the delta update and also are no longer used
for an initialization from the change log, have already been loaded into the ODS object.
If a delta initialization for the update exists in connected data targets, the requests have
to be updated first before the respective data can be deleted in the change log.
156) On what occasions do we have different update rules for each of the key figures in an InfoCube, and how is data stored in such cases?
A) If we want to derive different characteristic values depending on each of the key figure values, we have different update rules. Say we have two key figures, cost and profit, and an entry for account type: depending on each key figure we can classify the account as high cost/low cost or high profit/low profit. If we have separate update rules for each of the key figures, there can be multiple rows in the infocube corresponding to each row in the transaction data.

157) When are hierarchies used in an InfoObject, and how do they differ from the hierarchies available in BEx while querying?
A) Hierarchies are used for modeling hierarchical structures. Hierarchies defined on InfoObjects are loaded like master data, whereas hierarchies in BEx are created at query time. Furthermore, in BEx we have the flexibility of exchanging the nodes and leaves.
158) What kinds of data fields are used in Line Items, Transactional Figures and Cost of
Sales Ledger?
A) Check the respective tables in R/3.
159) What are Aggregates and when are they used?
A) An aggregate is a materialized, aggregated view of the data in an InfoCube. In an
aggregate, the dataset of an InfoCube is saved redundantly and persistently in a
consolidated form into the database. Aggregates make it possible to access InfoCube
data quickly in Reporting. Aggregates can be used in following cases:
1. The execution and navigation of query data leads to delays with a group of queries.
2. You want to speed up the execution and navigation of a specific query.
3. You often use attributes in queries.
4. You want to speed up reporting with characteristic hierarchies by aggregating specific
hierarchy levels.
160) How is the data of different modules stored in R/3?
A) Data is stored in multiple tables in R/3 based on the ER (entity relationship) model, to prevent redundant storage of data.
161) In what cases do we transfer data from one infocube to another?
A) Modifications can't be made to an infocube while it contains data. If we want to modify an infocube and no backup of the data exists, we can design another infocube with the required structure and load the data from the old infocube.
162) How often do we have a multi-layered structure in the ODS stage, and in what cases?
A) A multi-layered structure in the ODS stage is used to consolidate data from different data sources.
163) How is data extracted from systems other than R/3 and Flat files?
A) Data is extracted from systems other than R/3 and flat files using staging BAPIs.
164) When do TRFC and iDOC errors occur?
A) An intermediate document (IDoc) is a container for exchanging data between R/3, R/2 and non-SAP systems. IDocs are sent in the communication layer by transactional Remote Function Call (tRFC) or by other file interfaces (for example, EDI). tRFC guarantees that the data is transferred once only. (We were not able to find out when the errors occur.)
165) On what occasions do the key figures become attributes of characteristics?
A) When we want to display that particular key figure as a display attribute in a report. Key figures can only be made display attributes of InfoObjects. Suppose we are reporting on the performance of each sales person: we can declare the salary of the sales person as an attribute. Furthermore, key figures like net price (price per unit quantity or price per item), used as an attribute of a product, can be used to calculate key figures like total price (by multiplying net price by quantity using formulas).
166) Why is there a restriction of 16 dimension tables in an InfoCube and 16 key fields in an ODS?
167) On what factors does the loading time depend?
A) Loading time depends on the work load both on the BW side and source system side.
It might also depend upon the network connectivity.
168) How long does it take to load a million records into an info cube from an R/3
system?
A) Depending on work load on BW side and source system side loading time varies.
Typically it takes half an hour to load a million records.
169) Will the loading time be the same for the same amount of data from non-SAP systems like flat files?
A) It might not be the same; it depends on the extraction programs used on the source system side.
170) Can you tell me about a situation when you implemented a RemoteCube?
A) A RemoteCube is used when we want to report on transactional data without storing it on the BW side. It is ideally used when detailed data is required and we want to bypass loading the data into BW.
171) What is mySAP.com?
A) SAP solution to integrate all relevant business processes on the Internet. mySAP.com
integrates business processes in SAP and non-SAP systems seamlessly, and provides a
complete business environment for electronic commerce.
172) How is BW superior to other data warehousing tools (if it is superior)?
A) SAP BW provides good compatibility with other SAP products.
173) Can we just load the transaction data without loading the master data from a source system, when we are sure we are not going to query on the master data?
A) Yes, you can.
174) What are operating concern and partitioning in CO-PA?
A) An operating concern is the set of characteristics based on which we want to analyze the performance of the company. Partitioning divides the data into different datasets based on a certain characteristic, which enables parallel access to the data.
175) What is the difference between value fields and key figures in CO-PA?
A) Value fields comprise the data that CO-PA receives from various modules in R/3, whereas key figures are derived from these value fields.

176) How is the performance of an info cube measured?


A) Infocube performance can be measured based upon query response time.
177) What factors are used in measuring the performance of a query?
A) Query response time is used for measuring the performance of a query.
178) What is a process chain and how did you use it?
A) We used process chains to automate the delta loading process. Once you are finished with your design and testing, you can automate the processes listed in RSPC. I have a real-time example in the attachment.
179) What are RemoteCubes and how did you access and use them in your project?
A) A RemoteCube is an InfoProvider that does not physically store data but is used for non-trivial reporting. I have not used one, but an example would be: if you want to compare data consistency between R/3 and BW, you can run a report on a RemoteCube and compare it with a report in BW.
180) You must have worked on enhancements; which user exit did you work on? Can you explain?
181) What is the t-code for generic extractor?
A) RSO2
182) What is an InfoSet query?
A) An InfoSet is an InfoProvider that does not store data; it is only a view and needs to be built as a join. In treasury we built the currency exchange report this way: the report is not used often, so the data is stored in an ODS, and we built an InfoSet to join it with data from another object and built the report on that. Note that once you flag an ODS as reportable and start running queries on it, it no longer behaves like a simple flat table, and reporting becomes slow.
183) What is the purpose of aggregates?
A) They store frequently reported data in rolled-up form. Once you fill and activate an aggregate, BEx checks for aggregates before running a query and fetches the data much faster, so query performance improves a lot.
184) How did you do data modeling in your project? Explain.
A) Initially we study the client's business process: what kind of data flows through the system, its volume, the changes taking place in it, the analysis the users perform on the data, what they expect in the future, and how we can use BW functionality. Later we hold meetings with the business analysts and propose a data model based on the client's needs. We then give a proof-of-concept demo showing how we would build a BW data warehouse for their system. Once approval is obtained, requirements gathering and model building start, followed by testing in QA.
185) As you said you have worked on cubes and ODS objects: which one is better suited for reporting? Explain the drawbacks and benefits of each one.
A) Depending on what we want to report, we store the data in a cube or an ODS. Generally BW is used to store high volumes of data with fast reporting, for which the InfoCube is used. Master data is stored in separate tables, and the transaction data, which is basically numbers, is stored in the cube; the property of indexing works well here, and reporting is fast because a cube holds only numeric values.
B) When you load master data first, SIDs are created for that data. When you load the transaction data, it looks up the master data SIDs and gets linked via the dimensions (DIMs). You have this in a cube, so reporting is going to be fast, as both of them are numbers.
C) In an ODS we store data at a more detailed level, using its flat structure; reporting on it will be slow for the reason given in the previous answer.
186) What are the different cubes you worked with in FI?
187) What is a delta upload? What is the use of a delta upload? Is only changed or added data extracted, or is the full data set extracted?
A) To load (near) real-time data and make accurate decisions, we use delta uploads.
188) What are hierarchies? Explain how you used them in your project.
189) What is the t-code for CO-PA?
A) KEB0
190) What is a SID? What is the impact of using SIDs?
191) What is Table partitioning? What are Return Tables?
192) What is the t-code for Query Monitor?
A) RSRT
193) Apart from R/3, which legacy databases did you use for extraction?
A) Access, Informatica
194) What are the three ODS object tables? Explain.
195) Can you explain start routines and how you used them in your project? Give an example.

SAP BW FAQ
BW Query Performance
Question:
1. What kind of tools are available to monitor the overall Query Performance?
Answers:
o BW Statistics

o BW Workload Analysis in ST03N (Use Export Mode!)


o Content of Table RSDDSTAT
Question:
2. Do I have to do something to enable such tools?
Answer:
o Yes, you need to turn on the BW Statistics:
RSA1, choose Tools -> BW statistics for InfoCubes
(Choose OLAP and WHM for your relevant Cubes)
Question:
3. What kind of tools are available to analyse a specific query in detail?
Answers:
o Transaction RSRT
o Transaction RSRTRACE
Question:
4. Do I have an overall query performance problem?
Answers:
o Use ST03N -> BW System load values to recognize the problem. Use the
numbers given in the table 'Reporting - InfoCubes: Share of total time (s)'
to check whether one of the columns %OLAP, %DB, %Frontend shows a high
number for all InfoCubes.
o You need to run ST03N in expert mode to get these values
Question:
5. What can I do if the database proportion is high for all queries?
Answers:
Check:
o If the database statistic strategy is set up properly for your DB platform
(above all for the BW specific tables)
o If database parameter set up accords with SAP Notes and SAP Services (EarlyWatch)
o If Buffers, I/O, CPU, memory on the database server are exhausted?
o If Cube compression is used regularly
o If Database partitioning is used (not available on all DB platforms)
Question:
6. What can I do if the OLAP proportion is high for all queries?
Answers:
Check:
o If the CPUs on the application server are exhausted

o If the SAP R/3 memory set up is done properly (use TX ST02 to find
bottlenecks)
o If the read mode of the queries is unfavourable (RSRREPDIR, RSDDSTAT,
Customizing default)
Question:
7. What can I do if the client proportion is high for all queries?
Answer:
o Check whether most of your clients are connected via a WAN Connection and the
amount of data which is transferred is rather high.
Question:
8. Where can I get specific runtime information for one query?
Answers:
o Again you can use ST03N -> BW System Load
o Depending on the time frame you select, you get historical data or
current data.
o To get to a specific query you need to drill down using the InfoCube
name
o Use Aggregation Query to get more runtime information about a
single query. Use tab All data to get to the details.
(DB, OLAP, and Frontend time, plus Select/ Transferred records,
plus number of cells and formats)
Question:
9. What kind of query performance problems can I recognize using ST03N
values for a specific query?
Answers:
(Use Details to get the runtime segments)
o High Database Runtime
o High OLAP Runtime
o High Frontend Runtime
Question:
10. What can I do if a query has a high database runtime?
Answers:
o Check if an aggregate is suitable (use All data to get the values
"selected records to transferred records"; a high number here would
be an indicator that query performance can be improved with an aggregate)
o Check if the database statistics are up to date for the
cube/aggregate; use TX RSRV output (use the database check for statistics
and indexes)
o Check if the read mode of the query is unfavourable - recommended: (H)
Question:
11. What can I do if a query has a high OLAP runtime?
Answers:
o Check if a high number of cells is transferred to the OLAP processor
(use "All data" to get the value "No. of Cells")
o Use RSRT technical Information to check if any extra OLAP-processing
is necessary (Stock Query, Exception Aggregation, Calc. before
Aggregation, Virtual Char. Key Figures, Attributes in Calculated
Key Figs, Time-dependent Currency Translation)
together with a high number of records transferred.
o Check whether a user exit is involved in the OLAP runtime
o Check if large hierarchies are used and whether the entry hierarchy level
is as deep as possible. This limits the levels of the hierarchy that must
be processed. Use SE16 on the inclusion tables and use the List of Values
feature on the columns successor and predecessor to see which entry level
of the hierarchy is used.
- Check if a proper index on the inclusion table exists
Question:
12. What can I do if a query has a high frontend runtime?
Answers:
o Check if a very high number of cells and formattings is transferred
to the frontend (use "All data" to get the value "No. of Cells"), which
causes high network and frontend (processing) runtime.
o Check if the frontend PCs are within the recommendations (RAM, CPU MHz)
o Check if the bandwidth of the WAN connection is sufficient
What is ODS?
It is an operational data store. The ODS is a BW architectural component that sits between the PSA (Persistent Staging Area) and InfoCubes and allows BEx (Business Explorer) reporting. It is not based on the star schema and is used primarily for detail reporting rather than for dimensional analysis. ODS objects do not aggregate data as InfoCubes do. Data is loaded into an ODS object by inserting new records, updating existing records, or deleting old records, as specified by the RECORDMODE value. *-- Viji
1. How much time does it take to extract 1 million records from an infocube?
2. How much time does it take to load (as opposed to extract, above) 1 million records into an infocube?

3. What are the four ASAP Methodologies?


4. How do you measure the size of infocube?
5. Difference between infocube and ODS?
6. Difference between display attributes and navigational attributes? *-- Kiran
1. Ans: It depends; if you have complex coding in the update rules it will take longer, otherwise less than 30 minutes.
3. Ans:
Project plan
Requirements gathering
Gap Analysis
Project Realization
4. Ans:
In the number of records
5. Ans:
An infocube is structured as a star schema (extended), where a fact table is surrounded by different dimension tables that connect to SIDs. Data-wise, you will have aggregated data in the cubes.
An ODS is a flat structure (flat table) with no star schema concept, and it holds granular (detailed-level) data.
6. Ans:
A display attribute is used only for display purposes in a report, whereas a navigational attribute is used for drilling down in a report. We don't need to maintain a navigational attribute in the cube as a characteristic in order to drill down (that is the advantage).
*-- Ravi
Q1. SOME DATA IS UPLOADED TWICE INTO INFOCUBE. HOW TO CORRECT
IT?
Ans: How is that possible? If you loaded it manually twice, you can delete it by request.
Q2. CAN U ADD A NEW FIELD AT THE ODS LEVEL?
Sure you can. An ODS is nothing but a table.
Q3. CAN A NUMBER OF DATASOURCES HAVE ONE INFOSOURCE?
Yes, of course. For example, for loading texts and hierarchies we use different DataSources but the same InfoSource.

Q4. BRIEF THE DATAFLOW IN BW.


Data flows from the transactional system to the analytical system (BW). The DataSource on the transactional system needs to be replicated on the BW side and attached to an InfoSource and update rules respectively.
Q5. CURRENCY CONVERSIONS CAN BE WRITTEN IN UPDATE RULES. WHY
NOT IN TRANSFER RULES?
Q6. WHAT IS PROCEDURE TO UPDATE DATA INTO DATA TARGETS?
Full and delta.
Q7. AS WE USE Sbwnn,SBiw1,sbiw2 for delta update in LIS THEN WHAT IS THE
PROCEDURE IN LO-COCKPIT?
There is no LIS in the LO Cockpit. We have DataSources that can be maintained (fields appended). Refer to the white paper on LO Cockpit extractions.
Q8. SIGNIFICANCE OF ODS.
It holds granular data.
Q9. WHERE THE PSA DATA IS STORED?
In the PSA table.
Q10. WHAT IS DATA SIZE?
The volume of data one data target holds (in number of records).
Q11. DIFFERENT TYPES OF INFOCUBES.
Basic and virtual (RemoteCube, SAP RemoteCube and MultiCube)
Q12. INFOSET QUERY.
They can be made of ODS objects and InfoObjects.
Q13. IF THERE ARE 2 DATASOURCES, HOW MANY TRANSFER STRUCTURES
ARE THERE?
In R/3 or in BW? 2 in R/3 and 2 in BW.
Q14. ROUTINES?
They exist in InfoObjects, as transfer routines, as update routines and as start routines.
Q15. BRIEF SOME STRUCTURES USED IN BEX.
Rows and columns; you can create structures.
Q16. WHAT ARE THE DIFFERENT VARIABLES USED IN BEX?
Variable with default entry
Replacement path
SAP exit

Customer exit
Authorization
Q17. HOW MANY LEVELS YOU CAN GO IN REPORTING?
You can drill down to any level you want using Nav attributes and jump targets
Q18. WHAT ARE INDEXES?
Indexes are database indexes, which help in retrieving data quickly.
Q19. DIFFERENCE BETWEEN 2.1 AND 3.X VERSIONS.
Refer to the documentation.
Q20. IS IT NESSESARY TO INITIALIZE EACH TIME THE DELTA UPDATE IS
USED.
Nope
Q21. WHAT IS THE SIGNIFICANCE OF KPI'S?
KPIs indicate the performance of a company. They are key figures.
Q22. AFTER THE DATA EXTRACTION WHAT IS THE IMAGE POSITION.
After image (correct me if I am wrong).
Q23. REPORTING AND RESTRICTIONS.
Refer to the documentation.
Q24. TOOLS USED FOR PERFORMANCE TUNING.
ST* transactions, number ranges, deleting indexes before loads, etc.
Q25. PROCESS CHAINS: IF YOU ARE USING THEM, HOW WILL YOU
SCHEDULE DATA LOADS DAILY?
There should be some tool to run the jobs daily (SM37 jobs).
Q26. AUTHORIZATIONS.
Profile generator
Q27. WEB REPORTING.
What are you expecting??
Q28. CAN A CHARACTERISTIC BE AN INFOPROVIDER? CAN AN INFOOBJECT
BE AN INFOPROVIDER?
Of course.
Q29. PROCEDURES OF REPORTING ON MULTICUBES.
Refer to the help. What are you expecting? A MultiCube works on a union condition.

Q30. EXPLAIN TRANSPORTATION OF OBJECTS?


Dev ---> Q and Dev ---> P

SAP BW Interview Questions 2


1) What is a process chain? How many types are there? How many do we use in a real-time scenario? Can we define interdependent processes with tasks like data loading, cube compression, index maintenance, and master data & ODS activation with the best possible performance and data integrity?
2) What is data integrity and how can we achieve it?
3) What is index maintenance and what is the purpose of using it in real time?
4) When and why do we use infocube compression in real time?
5) What is meant by data modelling and what does the consultant do in data modelling?
6) How can we enhance Business Content, and for what purpose do we enhance it (given that we can simply activate Business Content)?
7) What is fine-tuning, how many types are there, and for what purpose is tuning done in real time? Can tuning only be done through infocube partitions and aggregates, or by other means as well?
8) What is meant by a multiprovider and for what purpose do we use a multiprovider?
9) What are scheduled and monitored data loads, and for what purpose?
Ans # 1:
Process chains exist in the Administrator Workbench. Using them we can automate ETL processes. They allow BW administrators to schedule all activities and monitor them (t-code: RSPC).
PROCESS CHAIN - Before defining PROCESS CHAIN, let us define a PROCESS in any given process chain: it is a procedure either within the SAP system or external to it, with a start and an end. This process runs in the background.
A PROCESS CHAIN is a set of such processes that are linked together in a chain; in other words, each process is dependent on the previous process, and the dependencies are clearly defined in the process chain.
This is normally done in order to automate a job or task that has to execute more than one process in order to complete the job or task.
1. Check the source system for that particular process chain.
2. Select the request ID of the process chain (it will be in the Header tab).
3. Go to SM37 in the source system.
4. Double-click on the job.
5. You will navigate to a screen.
6. In that screen, click the "Job Details" button.
7. A small pop-up window appears.
8. In the pop-up screen, take a note of:
a) Executing server
b) WP number/PID
9. Open a new SM37 session (/OSM37 command).
10. In it, click the "Application Servers" button.
11. You can see the different application servers.
12. Go to the executing server and double-click it (point 8 (a)).
13. Go to the PID (point 8 (b)).
14. On the leftmost side you can see a check box.
15. Check the check box.
16. On the menu bar you can see "Process".
17. Under "Process" you have the option "Cancel with Core".
18. Click on that option.
* -- Ramkumar K
Ans # 2:
Data integrity is about eliminating duplicate entries in the database and achieving normalization.
Ans # 4:
InfoCube compression consolidates the cube by eliminating duplicate entries (requests are moved from the F fact table into the E fact table and the request IDs are removed). Compressed infocubes require less storage space and are faster for retrieval of information. Here is the catch: once you compress, you can no longer delete from the InfoCube by request. You are safe as long as you don't have any error in modeling.
This compression can be done through a process chain and also manually.
Tips by: Anand
Ans#3
Indexing is a process where the data is stored by indexing it. E.g. a phone book: Prasad's number would be under "P" and Rajesh's number under "R". The phone book scheme is indexing; similarly, storing data by creating indexes is called indexing.
Ans#5
Data modelling is a process where you collect the facts, the attributes associated with the
facts, the navigational attributes, etc., and after you collect all these you decide which
ones you will be using. This collection is done by interviewing the end users, the power
users, the stakeholders, etc. It is generally done by the team lead, project manager or
sometimes a senior consultant (4-5 yrs of experience). So if you are new you don't have
to worry about it, but do remember that it is an important aspect of any data warehousing
solution, so make sure that you have read up on data modelling before attending any
interview or even starting to work.
Ans#6
We can enhance Business Content by adding fields to it. Since Business Content is
delivered by SAP, it may not contain all the InfoObjects, InfoCubes, etc. that you want to
use according to your company's data model. E.g. you have a customer InfoCube (in BC)
but your company uses an attribute for, say, apartment number; then instead of
constructing the whole InfoCube you can add the above field to the existing BC InfoCube
and get going.
Ans#7
Tuning is one of the most important processes in BW. Tuning is done to increase
efficiency: lowering the time for loading data into a cube, lowering the time for executing
a query, lowering the time for a drill-down, etc. Fine tuning = lowering time (for
everything possible). Tuning can be done not only via partitions and aggregates; there are
various other things you can do, e.g. compression, etc.
Ans#8
A MultiProvider can combine various InfoProviders for reporting purposes: you can
combine 4-5 InfoCubes, or 2-3 InfoCubes and 2-3 ODS objects, or InfoCubes, ODS
objects and master data, etc. You can refer to help.sap.com for more info.
Ans#9
A scheduled data load means you have scheduled the loading of data for some particular
date and time; you can do this on the Scheduler tab of the InfoPackage. Monitored means
you are monitoring that particular data load, or other loads, using transaction RSMON.
What is the purpose of setup tables?
Setup tables are a kind of interface between the extractor and the application tables. The
LO extractor takes data from the setup tables during initialization and full upload, so that
hitting the application tables for selection is avoided. As these tables are required only for
full and init loads, you can delete their contents after loading in order to avoid duplicate
data. Setup tables are filled with data from the application tables; they sit on top of the
actual application tables (i.e. the OLTP tables storing transaction records) and are filled
during the setup run. Normally it is good practice to delete the existing setup tables before
executing the setup runs, so as to avoid duplicate records for the same selections.
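Each extract structure has its own setup table, normally named <extract structure>SETUP. As a quick hedged check (the table name below assumes the sales-item extract structure MC11VA0ITM, i.e. DataSource 2LIS_11_VAITM; verify the name in SE11 on your own system), you can count what a setup run produced:

REPORT z_check_setup_table.

DATA lv_count TYPE i.

* Count the records currently in the setup table of 2LIS_11_VAITM.
* The table name follows the usual <extract structure>SETUP pattern
* and should be verified on your own system.
SELECT COUNT(*) FROM mc11va0itmsetup INTO lv_count.

WRITE: / 'Records in setup table MC11VA0ITMSETUP:', lv_count.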
We have a cube; what is the need to use an ODS? Why is an ODS necessary when we
already have a cube?
1) Remember a cube has aggregated data whereas an ODS has granular data.
2) In the update rules of an InfoCube you do not have an option to overwrite, whereas for
an ODS the default is overwrite. For example, if an order quantity changes from 10 to 8,
an ODS overwrite yields 8, whereas a cube would additively post both records unless the
extractor delivers a correcting delta.
What is the importance of transaction RSKC? How is it useful in resolving issues
with special characters?
How to handle double data loading in SAP BW?
What do you mean by SAP exit, user exit, customer exit?

What are some of the production support issues - a troubleshooting guide?


When do we go for Business Content extraction and when for LO/CO-PA extraction?
What are some of the InfoCube names in SD and MM that we use for extraction
and loading into BW?
How to create indexes on ODS and fact tables?
What is the data load monitor (RSMO or RSMON)?
1A. RSKC.
Using this T-code, you can allow the BW system to accept special characters in the data
coming from source systems. The list of permitted characters can be obtained after
analyzing the source system's data, or can be confirmed with the client during the
design-specs stage.
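Characters that are not permitted via RSKC typically cause records to fail in the transfer rules. A complementary technique is to cleanse the offending field in a transfer routine; the following is a minimal sketch (the source field ZZTEXT and the allowed character set are assumptions, while TRAN_STRUCTURE and RESULT are the usual transfer routine conventions):

* Transfer routine fragment (sketch): replace every character that
* is not in the allowed set with a space, so the record passes the
* permitted-character check instead of failing the load.
CONSTANTS lc_allowed(50) TYPE c
  VALUE 'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 .,-_/'.

DATA: lv_text(60) TYPE c,
      lv_len      TYPE i,
      lv_off      TYPE i.

lv_text = TRAN_STRUCTURE-zztext.   " assumed source field
lv_len  = strlen( lv_text ).

WHILE lv_off < lv_len.
  IF lv_text+lv_off(1) NA lc_allowed.
    lv_text+lv_off(1) = ' '.
  ENDIF.
  lv_off = lv_off + 1.
ENDWHILE.

RESULT = lv_text.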
2A. Exits.
These exits are customized for handling data transfer in various scenarios
(e.g. a replacement path in reports -> a way to pass a variable to a BW report).
Some can be developed by a BW/ABAP developer and inserted wherever required.
Some of these programs are already available as part of SAP Business Content; these
are called SAP exits. Depending on the requirement, we may need to extend some exits
and customize them.
3A.
Production issues are different for each BW project; the most common issues can be
obtained from some of the previous mails (data load issues).
4A.
LIS extraction is old school and not preferred in big BW systems; here you can expect
issues related to performance and data duplication in the setup tables.
LO extraction came with most of the advantages: using it, you can extend existing extract
structures and use customized DataSources.
If you can fetch all required data elements using SAP-provided extract structures, you
don't need to write custom extractions. You will get a clear idea on this after analyzing
the source system's data fields and the required fields in the target system's data target
structure.
5A.

MM - 0PUR_C01 (Purchasing Data), 0PUR_C03 (Vendor Evaluation)

SD - 0SD_C01 (Customer), 0SD_C03 (Sales Overview), etc.
6A.
You can do this by choosing the "Manage Data Target" option and clicking the buttons
available on the "Performance" tab.
7A.
RSMO is used to monitor the data flow from the source system to the target system. You
can view data by request, source system, time, request ID, etc. Just explore the
transaction.

Daily Tasks in Support Role and Infopackage Failures


1. Why are there frequent load failures during extractions, and how do we analyse
them?
If the failures are data-related, there may be data inconsistency in the source system even
though you handle it properly in the transfer rules. You can monitor these issues in
T-code RSMO, fix the failed records in the PSA, and update from there.
If you are talking about the whole extraction process, there may be issues with work
process scheduling and IDoc transfer from the source system to the target system. Such
loads can be re-initiated by cancelling the specific data load (usually by changing the
request status from yellow to red in RSMO) and restarting the extraction.
2. Can anyone explain briefly about 0RECORDMODE in ODS?
0RECORDMODE is an SAP-delivered InfoObject and is added to an ODS object when it
is activated. Using it, the ODS is updated correctly during delta loads. Its values include
X, D and R: D and R are for deleting and reversing records, and X marks before-image
records that are skipped during the delta load.
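As a hedged illustration of how these values are typically inspected (DATA_PACKAGE and the field name RECORDMODE follow the usual start-routine conventions in update rules; verify them against your own routine frame):

* Start routine fragment (sketch): classify the delta images
* arriving in the data package by their 0RECORDMODE value.
DATA ls_rec LIKE LINE OF data_package.

LOOP AT data_package INTO ls_rec.
  CASE ls_rec-recordmode.
    WHEN space.   " after-image: the new status of the record
    WHEN 'X'.     " before-image: the old status
    WHEN 'D'.     " delete: the record is to be removed
    WHEN 'R'.     " reverse: cancels a previously posted record
  ENDCASE.
ENDLOOP.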
3. What is reconciliation in BW? What is the procedure for reconciliation?
Reconciliation is the process of comparing the data, after it is transferred to the BW
system, with the data in the source system. The procedure: if the data comes from a single
table, you can check it directly with SE16. If the DataSource is a standard DataSource
drawing from many tables, what I used to do was ask the R/3 consultant to run a report
for the same selections, get the data into an Excel sheet, and reconcile it with the data in
BW. If you are familiar with the R/3 reports yourself, you are good to go, meaning you
need not depend on the R/3 consultant (it is better to know which reports to run to check
the data).

4. What are the daily tasks we do in production support? How many times do we
extract data, and at what times?
It depends. Data load timings are in the range of 30 minutes to 8 hours. The time depends
on the number of records and the kind of transfer rules you have provided. If the transfer
rules involve roundabout logic and the update rules have calculations for customized key
figures, long runtimes are to be expected.
Usually you need to work in RSMO, see which records are failing, and update from the
PSA.
5. What are some of the frequent failures and errors?
As for frequent failures and errors, there is no fixed reason for a load to fail; from an
interview perspective I would answer it this way:
a) Loads can fail due to invalid characters
b) Because of a deadlock in the system
c) Because of a previous load failure, if the load is dependent on other loads
d) Because of erroneous records
e) Because of RFC connection problems
These are some of the reasons for load failures.

Difference Between BW Technical and Functional


In general, Functional means deriving the functional specification from the business
requirement document. This job is normally done either by a business analyst or a
systems analyst who has very good knowledge of the business. In some large
organizations there will be a business analyst as well as a systems analyst.
Any business requirement, or need for new reports or queries, originates with the
business user. This requirement is recorded, after discussion, by the business analyst.
A systems analyst analyses these requirements and generates the functional specification
document. In the case of BW this could also be called the logical design in DATA
MODELING.
After review, this logical design is translated into a physical design. This process
defines all the required dimensions, key figures, master data, etc.
Once this is approved and signed off by the requester (users), it is converted into
practically usable tasks using the SAP BW software. This is called Technical.
The whole process of creating InfoProviders, InfoObjects, InfoSources, source
systems, etc. falls under the Technical domain.

What role does a consultant play if the title is BW administrator? What are his
day-to-day activities, and which main focus areas should he be proficient in?
BW Administrator - the person who provides authorization access to different roles and
profiles depending on the requirement.
For example, there are two groups of people: Group A and Group B.
Group A - Manager
Group B - Developer
The authorizations or access rights for the two groups are different.
For this sort of activity we require an administrator.

Difference Between PSA and ALE IDoc


What is the difference between PSA and ALE IDoc? And how is data transferred
using each of them?
The following update types are available in SAP BW:
1. PSA
2. ALE (data IDoc)
You determine the transfer method, PSA or IDoc, in the transfer rule maintenance screen.
For both transfer methods, the loading process is triggered by a request IDoc sent to the
source system. Info IDocs are used in both transfer methods; Info IDocs are transferred
exclusively using ALE.
A data IDoc consists of a control record, a data record, and a status record. The control
record contains administrative information such as the receiver, the sender, and the client.
The status record describes the status of the IDoc, for example "Processed". If you use
the PSA for data extraction, you benefit from increased flexibility (treatment of incorrect
data records): since you are storing the data temporarily in the PSA before updating it
into the data targets, you can check the data and change it if necessary. Unlike a data
request with IDocs, the PSA gives you various options for additional data updates into
data targets:
InfoObject/Data Target Only - This option means that the PSA is not used as a temporary
store. You choose this update type if you do not want to check the source system data for
consistency and accuracy, or you have already checked this yourself and are sure of it,
and you are not going to change the structure of the data target again.

PSA and InfoObject/Data Target in Parallel (Package by Package) - BW receives the data
from the source system, writes it to the PSA and at the same time starts the update into
the relevant data targets. Therefore, this method has the best performance.
The parallel update works as follows: a dialog process is started per data package, in
which the data of the package is written into the PSA table. If the data is posted
successfully into the PSA table, the system releases a second, parallel dialog process that
writes the data to the data targets. In this dialog process the transfer rules are applied to
the data records of the package, the data is transferred to the communication structure,
and then written to the data targets. The first dialog process (data posting into the PSA)
confirms to the source system that it is completed, and the source system sends a new
data package to BW while the second dialog process is still updating the data into the
data targets.
The parallelism relates to the data packages, that is, the system writes the data packages
into the PSA table and into the data targets in parallel. Caution: the maximum number of
processes set in the source system in Customizing for the extractors does not restrict the
number of processes in BW. Therefore, BW can require many dialog processes for the
load. Ensure that there are enough dialog processes available in the BW system; if there
are not enough processes on the system side, errors occur. For that reason, this method is
the least recommended.
PSA and then into InfoObject/Data Targets (Package by Package) - Updates data in series
into the PSA table and into the data targets, package by package. The system starts one
process that writes the data package into the PSA table; once the data is posted
successfully into the PSA table, it is written to the data targets in the same dialog process.
Updating in series gives you more control over the overall data flow compared to parallel
transfer, since there is only one process per data package in BW. In the BW system, the
maximum number of dialog processes required per data request corresponds to the setting
you made in Customizing for the extractors in the control parameter maintenance screen.
In contrast to the parallel update, the system confirms completion only after the data has
been updated into the PSA and into the data targets for the first data package.
Only PSA - The data is not posted onward from the PSA table immediately. It is useful to
transfer the data only into the PSA table if you want to check its accuracy and consistency
and, if necessary, modify it. You then have the following options for updating data from
the PSA table:
Automatic update - To update the data automatically into the relevant data targets after
all data packages are in the PSA table and have been updated successfully there, choose
Update Subsequently in Data Targets on the Processing tab page when you schedule the
InfoPackage in the scheduler.
Tips by: Sunil

The Three Layers of SAP BW


SAP BW has three layers:

Business Explorer: As the top layer in the SAP BW architecture, the Business
Explorer (BEx) serves as the reporting environment (presentation and analysis)
for end users. It consists of the BEx Analyzer, BEx Browser, BEx Web, and BEx
Map for analysis and reporting activities.

Business Information Warehouse Server: The SAP BW server, as the middle
layer, has two primary roles:
Data warehouse management and administration: these tasks are handled by the
production data extractor (a set of programs for the extraction of data from R/3
OLTP applications such as logistics and controlling), the staging engine, and the
Administrator Workbench.
Data storage and representation: these tasks are handled by the InfoCubes in
conjunction with the data manager, the Metadata Repository, and the Operational
Data Store (ODS).

Source Systems: The source systems, as the bottom layer, serve as the data
sources for raw business data. SAP BW supports various data sources:
R/3 systems as of Release 3.1H (with Business Content), and R/3 systems prior
to Release 3.1H (SAP BW regards these as external systems)
Non-SAP systems or external systems
mySAP.com components (such as mySAP SCM, mySAP SEM, mySAP CRM,
or R/3 components) or another SAP BW system.

What Is SPRO In BW Project?


1) What is SPRO?
2) How is it used in a BW project?
3) What is the difference between IDoc and PSA transfer methods?
1. SPRO is the transaction code for the Implementation Guide (IMG), where you can
make configuration settings.
* Type SPRO in the transaction box and you will get the screen Customizing:
Execute Project.
* Click on the SAP Reference IMG button; you will come to the Display IMG screen.
* The following path will allow you to make the configuration settings:
SAP Customizing Implementation Guide -> SAP NetWeaver -> SAP Business
Information Warehouse.
2. SPRO is used to configure the following settings:
* General settings: printer settings, fiscal year settings, ODS object settings,
authorization settings, settings for displaying SAP documents, etc.
* Links to other systems: links between flat files and BW systems, between R/3, BW
and other data sources, links between the BW system and Microsoft Analysis Services,
Crystal Enterprise, etc.
* UD Connect settings: configuring the BI Java Connectors, establishing the RFC
destination from SAP BW to the J2EE Engine, installation of availability monitoring for
UD Connect.
* Automated processes: settings for batch processes, background processes, etc.
* Transport settings: settings for source system name change after transport, and
creating the destination for import post-processing.
* Reporting-relevant settings: BEx settings, general reporting settings.
* Settings for Business Content, which is provided by SAP.
3. PSA: Persistent Staging Area: a holding area for raw data. It contains detailed
requests in the format of the transfer structure. It is defined per DataSource and source
system, and is source-system dependent.
IDocs: Intermediate Documents: data structures used as API working storage for
applications that need to move data into or out of SAP systems.

Data load in SAP BW


What is the strategy to load, for example, 500,000 entries into BW (material master,
transactional data)?
How to separate these entries into small packages and transfer them to BW
automatically? Is there a strategy for that? Is there some configuration for that?
See OSS note 411464 (an example concerning info structures from purchasing
documents) on creating smaller jobs in order to integrate a large amount of data.
For example, if you wish to split your 500,000 entries into five intervals:
- Create 5 variants of RMCENEAU, one for each interval
- Create 5 jobs (SM36) that execute RMCENEAU, one for each variant
- Schedule your jobs
- You can then see the result in RSA3
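A hedged sketch of scripting those five jobs instead of creating them one by one in SM36 (the variant names Z_INT1 to Z_INT5 are assumptions; JOB_OPEN/JOB_CLOSE are the standard background-job API):

REPORT z_schedule_setup_jobs.

DATA: lv_jobname  TYPE tbtcjob-jobname,
      lv_jobcount TYPE tbtcjob-jobcount,
      lv_variant  TYPE raldb-variant,
      lv_index(1) TYPE n.

* Create one background job per interval variant of RMCENEAU.
DO 5 TIMES.
  lv_index = sy-index.
  CONCATENATE 'Z_RMCENEAU_' lv_index INTO lv_jobname.
  CONCATENATE 'Z_INT' lv_index INTO lv_variant.   " assumed variants

  CALL FUNCTION 'JOB_OPEN'
    EXPORTING
      jobname  = lv_jobname
    IMPORTING
      jobcount = lv_jobcount.

* Add RMCENEAU with its interval variant as the job step.
  SUBMIT rmceneau USING SELECTION-SET lv_variant
         VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

* Release the job for immediate start.
  CALL FUNCTION 'JOB_CLOSE'
    EXPORTING
      jobname   = lv_jobname
      jobcount  = lv_jobcount
      strtimmed = 'X'.
ENDDO.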

Loading Data From a Data Target


Can you please guide me through this activity with some important steps?

I have a few requests without the data mart status. How can I use only them
and create an export DataSource?
Can you please tell me how my data mart mechanism will work after the loading?
Follow these steps:
1. Select the source data target (in your case X); in the context menu click on Create
Export DataSource. A DataSource (InfoSource) named 8<name of data target> will be
generated.
2. In the Modelling menu click on Source Systems, select the logical source system of
your BW server, and in the context menu click on Replicate DataSource.
3. In Data Modelling click on InfoSources and search for the InfoSource 8<name of
data target>. If it is not found in the search, refresh. If you still cannot find it, from Data
Modelling click on InfoSources, in the right-side window again select InfoSources, and
in the context menu click on Insert Lost Nodes.
Now search and you will definitely find it.
4. Now go to the receiving data targets (in your case Y1, Y2, Y3) and create update rules.
In the next screen select the InfoCube radio button and enter the name of the source data
target (in your case X). Click the Next Screen button (Shift+F7), select the Addition radio
button, then select the Source Key Field radio button and map the key fields from the
source cube to the target cube.
5. In Data Modelling click on InfoSources, select the InfoSource which you replicated
earlier, and create an InfoPackage to load the data.

Difference in number of data records


-----Original Message-----
Subject: Difference in number of data records
Hello,
I have uploaded data from R/3 to BW (Controlling DataSources).
The problem is that when I use the extractor checker (RSA3) in R/3 for a
specific DataSource (0CO_OM_OPA_1) it shows me that there are 1,600 records.
When I load this DataSource in BW it shows me that there are 400,000
records. I am uploading data to "PSA only".
Any ideas why this is happening?

Thanks
-----Reply Message-----
Subject: RE: Difference in number of data records
Check the 'data recs/call' and 'number of extract calls' parameters in
RSA3. Most likely the actual extract is only making one call with a larger
data recs/call number. The extraction process collects data records
with the same key, so less data has to be transferred to BW. When you
run RSA3 you are probably getting similar records (that would normally
be collected) in different data packets, thereby creating more records. Try
running RSA3 with a much higher recs/call (2000) for several calls.
What is the quickest way to find the R/3 source table and field name for a field appearing
on the BW InfoSource?
-----Reply Message-----
Subject: RE: R/3 Source Table.field - How To Find?
Hi,
With some ABAP knowledge you can find this info:
1. Start ST05 (SQL trace) in R/3
2. Run RSA3 in R/3 for just a few records
3. After RSA3 finishes, stop the SQL trace in ST05
4. Analyze the SQL statements in ST05
You can find the tables this way - but the process doesn't help e.g. for the LO cockpit
DataSources.
Hope this helps,

SAP BW versus R/3 Reporting


-----Original Message-----
Subject: BW versus R/3 Reporting
Dear All,
Would it be sufficient just to web-enable R/3 reports? Why does one need
to implement BW? What are the major benefits of reporting with BW over
R/3?
Thanking you,

-----Reply Message-----
Subject: RE: BW versus R/3 Reporting

There are quite a few companies that share your thought, but R/3 was designed
as an OLTP system and not an analytical and reporting system. In fact,
depending on your needs, you may even get away with a reporting instance
(quite easy with Sun or EMC storage). Yes, you can run as many reports as you
need from R/3 and web-enable them, but consider these factors:
1: Performance -- heavy reporting along with regular OLTP transactions can
produce a lot of load on both R/3 and the database (CPU, memory, disks,
etc.). Just take a look at the load put on your system during a month end,
quarter end, or year end -- now imagine that occurring even more frequently.
2: Data analysis -- BW uses data warehouse and OLAP concepts for storing
and analyzing data, whereas R/3 was designed for transaction processing. With
a lot of work you can get the same analysis out of R/3, but it is much easier
with BW.
Regards,
-----Reply Message-----
Subject: RE: BW versus R/3 Reporting
Major benefits of BW include:
1) By offloading ad-hoc and long-running queries from the production R/3 system
to the BW system, overall system performance should improve on R/3.
2) Another key performance benefit of BW is the database design. It is
designed specifically for query processing, not data updating and OLTP.
Within BW, the data structures are designed differently and are much better
suited for reporting than R/3 data structures. For example, BW utilizes a star
schema design, which includes fact and dimension tables with bitmapped
indexes. Other important factors include the built-in support for
aggregates, database partitioning, and more efficient processing by utilizing
tRFC versus IDoc.
3) Better front-end reporting within BW. Although the BW Excel front-end has
its problems, it provides more flexibility and analysis capability than the
R/3 reporting screens.
4) BW has the ability to pull data from other SAP or non-SAP sources into a
consolidated cube.
In summary, BW provides much better performance and stronger data analysis
capabilities than R/3.

Removing '#' in Analyzer (Report)


In an ODS, there are records having a blank/empty value for some of the fields,
e.g. the field 'Creation Date' has no value for some of the records.
When I execute the query in the Analyzer, the value '#' is displayed in place of the
blank value for date and other characteristic fields. I want to show it as
blank/space instead of '#'. How to do this?
I had a similar problem: our client didn't want to see # signs in the report. This is what I
did: I created a macro in the workbook called SAPBEXonRefresh and put my code in the
Visual Basic editor. The same code works for a single query as well, since the # signs are
handled whenever the query is refreshed. You can find similar code on the SAP Service
Marketplace.
I would still suggest not taking out the # sign, as it represents 'no value' in the data mart,
and this is the SAP standard. I convinced my client of this and later they were OK with it.
The code is below:
Sub SAPBEXonRefresh(queryID As String, resultArea As Range)
    If queryID = "SAPBEXq0001" Then
        resultArea.Select
        'Remove '#'
        Selection.Cells.Replace What:="#", Replacement:="", LookAt:=xlWhole, _
            SearchOrder:=xlByRows, MatchCase:=False, MatchByte:=True
        'Remove 'Not assigned'
        Selection.Cells.Replace What:="Not assigned", Replacement:="", LookAt:=xlWhole, _
            SearchOrder:=xlByRows, MatchCase:=False, MatchByte:=True
    End If
    ' Set focus back to top of results
    resultArea(1, 1).Select
End Sub

How To Convert Base Units Into Target Units In BW Reports


My client has a requirement to convert base units of measurement into target units of
measurement in BW reports. How do I write the conversion routine? Please also refer to
the conversion routine used so that the characteristic value (key) of an InfoObject can be
displayed, or used, in a different format to how it is stored in the database.
Have a look at the how-to document "HOWTO_ALTERNATE_UOM2", or you can use
the function module 'UNIT_CONVERSION_SIMPLE':
CALL FUNCTION 'UNIT_CONVERSION_SIMPLE'
  EXPORTING
    input                = ACTUAL_QUANTITY
*   no_type_check        = 'X'
*   round_sign           = ' '
    unit_in              = ACTUAL_UOM
    unit_out             = 'KG'     " UOM you want to convert to
  IMPORTING
*   add_const            =
*   decimals             =
*   denominator          =
*   numerator            =
    output               = w_output-h_qtyin_kg
  EXCEPTIONS
    conversion_not_found = 1
    division_by_zero     = 2
    input_invalid        = 3
    output_invalid       = 4
    overflow             = 5
    type_invalid         = 6
    units_missing        = 7
    unit_in_not_found    = 8
    unit_out_not_found   = 9
    OTHERS               = 10.
IF sy-subrc <> 0.
* MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
*   WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.
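Note (general background, not from the original thread): UNIT_CONVERSION_SIMPLE converts between units of the same dimension as maintained in the units table T006; material-specific conversions (e.g. pieces to kilograms) need the conversion factors from the material master instead. In a load routine, a non-zero sy-subrc (e.g. CONVERSION_NOT_FOUND) is usually handled by skipping or flagging the record rather than aborting the whole load.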

Deltas Not Working for Installation Master Data


I am having trouble with the deltas for the master data object "installation". The
changes are clearly recorded in the time-dependent and time-independent tables,
EANL/EANLH. The delta update mode uses ALE pointers. Does anyone know of a
table where I can check where these deltas/changes are temporarily stored, or what
the process behind this type of delta is?

The following steps must be executed:


1. Check whether the ALE change pointers are active in your source system (transaction
BD61) and whether the number range is maintained (transaction BDCP).
2. In addition, check in the ALE Customizing whether all message types you need are
active (transaction SALE -> Model and implement business processes -> Configure the
distribution of master data -> Set the replication of changed data -> Activate the change
pointer for each message type).
3. Check whether the number range for the message type BI_MSTYPE is maintained
(transaction SNUM -> entry 'BI_MSTYPE' -> Number range -> Intervals). The entry for
'No.' must be exactly '01'. In addition, the interval must start with 0000000001, and the
upper limit must be set to 0000009999.
4. Go to your BW system and restart the Administrator Workbench.
All of the following activities occur in the InfoSource tree of the Administrator
Workbench.
5. Carry out the function "Replicate DataSource" on the affected source system for the
InfoObject carrying the master data and texts.
6. Activate the transfer structure.
All changes, initial data creations, and deletions of records are recorded in the source
system from now on.
7. Create an InfoPackage for the source system. On the 'Update parameters' tab there
are three alternative extraction modes:
Full update
Delta update
Initialization of delta procedure
First initialize the delta procedure and then carry out the delta update.
An update on this issue:
In the EMIGALL process, SAP decided to bypass all the standard processes for updating
the delta queues on IS-U, because they would cause too much overhead during the
migration. It is still possible to modify the standard programs, but it is not recommended,
unless you want to crash your system.
The other options are as follows:
- Extract the master data with full extractions using intervals;
- Modify the standard to put the data in a custom table on which you can create a generic
delta;
- Modify the standard to put the ALE pointers in a custom table and then use a copy of the
standard functions to extract the data;
- Extract the data you want into a flat file and load it into BW.
In short, if you want to extract the data from IS-U, do not do it during the migration; find
another solution to extract it afterwards.

PS: If you have a generic extractor and a huge volume of data, you can improve
performance with a generic delta by doing multiple INITs with ranges as selection
criteria and then a single DELTA (which is the summation of all INITs).

SAP BW ERROR : replicate data from source system


Environment: BW 2.1C on Windows NT 4, source system 4.0B.
The RFC is working fine, but when you try to replicate data from the source system you
get an ABAP short dump in which the exception condition "CNTL_ERROR" is raised.
Take a look at OSS notes 158985 and 316243, depending on what patch level, GUI, or
kernel you are on.

Transport Process Chains to Test System


What is the best way to transport process chains to the test system?
Many additional and unwanted objects got collected when I tried to collect the
process chains in the Transport Connection.

To transport a process chain, the best approach is to transport only the objects created for
the process chain. On my system I created specific objects for the PC - InfoPackages,
jobs, variants - and those objects are used only by the PC. This way I avoid errors when
users restart a load or job manually.
So when I want to transport a process chain, I go to the monitor, select the PC, set the
grouping to only the necessary objects, and go through the generated tree to select only
what I need. Then I go to SE10 to check that the transport contains no other objects
which could impact my target system.
You can avoid some unnecessary objects by clicking Grouping > Data Flow Before or
Data Flow After. For example, if you already have InfoPackages in your target system
but not process chains, and you only want to transport the process chain without any
other objects like transfer structures or InfoPackages, you can choose the before or after
option. You can also choose the hierarchy or list option from the Display tab if you have
objects in bulk, but make sure all objects are selected (when different process chains
contain different kinds of objects, it is better to use Hierarchy, not List).
While creating these transport requests some objects may be in use or locked in another
transport request, so first release them via transaction SE03, using Unlock Objects
(Expert Tool).
These options can reduce your effort while collecting your objects. If, even after all this
effort, you get a warning or error like "objects are already in the system", ask Basis to
use overwrite mode.

Transport a specific infoobject


How to transport a specific InfoObject? I tried to change it and then save, but the
transport request won't appear. How do I manually transport that object?
1. Administrator Workbench (RSA1), then Transport Connection.
2. Object Types, then from the list of objects put the requested one on the right side of
the screen (drag & drop).
3. Click "Transport Objects", enter the development class name and specify the transport
(or create a new one).
4. Transaction SE01: check the transport and release it.
5. Move the transport up to the other system.
If you change and reactivate the InfoObject but get no transport request, this means that
your InfoObject is still in the $TMP class.
Go into the maintenance of the InfoObject, menu Extras -> Object Directory Entry, and
change the development class. At this point you should get a pop-up requesting a
transport request.
If you're not getting a transport request when you change and activate, it could also be
that the InfoObject is already on an open transport.
When you collect the object in the Transport Connection as described above, you will see
in the right-hand pane an entry called Transport Request. If there is an entry here, the
object is already on a transport, and this gives you the transport number.
You can then use SE01 or SE10 to delete the object from the existing transport if that is
what you want to do; then, when you change and activate the object again, you should be
prompted for a transport request. Alternatively, you can use the existing transport,
depending on what else is on it.

Infocube Compression
I was dealing with the "Compress" tab while managing the InfoCube and was able
to compress the InfoCube and send the data to the E table, but was unable to find
concrete answers to the following issues:
1. What is the exact scenario in which we use compression?
2. What actually happens in practice when we compress?
3. What are the advantages of compressing an InfoCube?
4. What are the disadvantages of compressing an InfoCube?
1. Compression creates a new cube that has consolidated and summed duplicate
information.
2. When you compress, BW does a GROUP BY on the dimensions and a SUM on the
measures; this eliminates redundant information.
3. Compressed InfoCubes require less storage space and are faster for retrieval of
information.
4. Once a cube is compressed, you cannot alter the information in it. This can be a big
problem if there is an error in some of the data that has been compressed.
I understand the advantage of compressing the InfoCube is performance. But I
have a doubt: if I compress one or more request IDs of my InfoCube, will the data
continue to appear in my reports (Analyzer)?

The data will always be there in the InfoCube; the only thing that will be missing is the
request IDs. You can take a look into your package dimension and see that it will be
empty after you compress.
Compression, yes, is for performance. But before doing compression you should keep
two things in mind very carefully:
1) If your cube loads data with custom-defined deltas, you should check whether the
delta is happening properly; the procedure is to compress some requests and then
schedule the delta.
2) If your system has outbound loads from the cube that work with request IDs, then you
need to follow some other procedure, because request IDs won't be available after
compression.
These two things are very important when you go for compression.

How to Compress InfoCube Data


How is InfoCube compression done?
Create aggregates for that InfoCube.
-----------------------------------------------------------------------------------
I guess the question was how we can compress the data inside a cube; that is usually
done by deleting the Request ID column value. This can be done through the
Manage -> Compress tab.
-----------------------------------------------------------------------------------
Go to RSA1.
Under Modeling --> choose InfoProvider --> InfoArea and then --> select your
InfoCube.
Right-click on your InfoCube --> from the context menu --> choose Manage.
Once you are in the Manage Data Targets screen:
Find out the request numbers and decide up to which request ID you want to compress.
Go to the Collapse tab (under Compress) --> choose the request ID and click Release.
The selected request ID and everything below it will be compressed.

What is happening behind the scenes: after the compression, the F fact table contains
no more data. Instead, the compressed data now appears in the E fact table.
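A quick hedged way to observe this (the cube name ZSALES is a made-up example; /BIC/F* and /BIC/E* is the standard naming of the fact tables of custom cubes):

REPORT z_check_compression.

DATA: lv_f TYPE i,
      lv_e TYPE i.

* Record counts of the uncompressed (F) and compressed (E) fact
* tables of a hypothetical custom cube ZSALES.
SELECT COUNT(*) FROM /bic/fzsales INTO lv_f.
SELECT COUNT(*) FROM /bic/ezsales INTO lv_e.

WRITE: / 'F fact table records:', lv_f,
       / 'E fact table records:', lv_e.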

Cube to Cube Load


You need to move some data from one cube to another. The steps involved are:
First create an Export DataSource from the original cube (right-click on the
InfoCube and select Generate Export Data Source).
Then assign the new DataSource to the new cube (click on 'Source Systems', select your
BW server and click 'Replicate').
Then configure your InfoSource and InfoPackage.
Lastly, you are ready to load.

How to build an extractor?


Using SAP BW 2.1C, and soon 3.0A, we have to extract data from R/3 FI. The
standard extractor can't be used, since the data needs to be enhanced and go
through various routines before it contains all the information needed in BW. A
delta upload mechanism is needed.
Where can information be found on:
- building a new extractor (including a delta mechanism)
- enhancing a standard extractor?
Technically, programming a function module DataSource would be a modification.
There's also a way that allows you to stay close to the standard, especially if you extract
data from standard tables:
In OLTP:
1) Find out which change document object points to the table.
2) Use the generic extractor to build a DataSource for _master data_ over the table.
3) Use the "enhance extract structure" feature in the BW IMG to add computed fields to
the DataSource.
4) Use the user exit for extraction to program your enhancement logic and compute the
additional fields (see the sketch after this list).
5) In DataSource maintenance, enter the change document object to delta-enable the
DataSource.
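For step 4, the user exit for transaction data extraction is EXIT_SAPLRSAP_001 (enhancement RSAP0001), implemented in include ZXRSAU01. A minimal hedged sketch (the DataSource name, extract structure and appended field are all made-up examples):

* Include ZXRSAU01 - user exit EXIT_SAPLRSAP_001 for transaction
* data. Fill an appended field of the extract structure here.
DATA ls_data TYPE zoxid0001.        " hypothetical extract structure

CASE i_datasource.
  WHEN 'ZFI_GL_ITEMS'.              " hypothetical DataSource
    LOOP AT c_t_data INTO ls_data.
*     Derive the appended field, e.g. via a lookup table.
*     ls_data-zzsegment = ...
      MODIFY c_t_data FROM ls_data.
    ENDLOOP.
ENDCASE.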

In BW:
7) Create an InfoObject with attributes. (Key figures can be attributes, too.) Use
concatenated keys or compounding ("Klammerung") to make the key fields identical to
the key fields of your source table.
8) Create InfoPackages for the initial load and the delta load, and load the data.
Now you need to load the data from your InfoObject into a data target for transactional
data (ODS or InfoCube).
In BW:
9) In InfoObject maintenance, look up the view for the master data of your new
InfoObject.
10) Use the generic extractor to create a generic DataSource for transactional data from
this view.
11) Load data from this second generic extractor into your cube/ODS.
As far as I know, BW 3.0 will allow loading data straight from OLTP _master data_
DataSources into BW ODS objects/cubes, so as soon as you've upgraded, steps 7-11 will
be obsolete.

The query could not be saved due to a problem in transport


Submitted by : Anil
Contact Email : ani_mannava@yahoo.com
After creating a query in BEx, you try to save it and get the following popup message:
"The query could not be saved due to a problem in transport".
Steps to correct the problem:
1. Within the Administrator Workbench, click on the Transport Connection tab in the
navigation window on the left-hand side.
hand side.
2. Select the Request BEx button on the toolbar.
3. Create a transport.
4. Try to change the query again.

ODS infosource not showing up

To display export InfoSources under the InfoSource hierarchy you need to do the
following:
1. Right-click the tree at the highest level (InfoSources node) and then choose Insert
Lost Nodes.
2. Also make sure that under Settings in the menu options you have selected the option
to display generated objects for ODS; by default the setting is set to hide these objects.
And finally, refresh your tree. You should be able to see the InfoSources and the
InfoPackages. They might show up under Unassigned Nodes; if you have not transferred
the BW application component hierarchy they show up under the DM (Data Marts)
node.

Query View Workbook


You formatted a query view (by changing fonts, column width, etc.) and saved it as a
workbook in your favourites in the Browser.
When you execute the workbook through the browser you get the format that has been
saved; however, you lose the format when you click the Refresh button.
How do you make the settings permanent so that the formatting remains as designed in
the workbook?
You can do this: in Excel with BW you have what I categorize as two different types of
cells: "Excel" cells and "BW" cells. You can determine which you are in by selecting a
cell and right-clicking. If you get the BW options (Drilldown, Swap, Insert...) then you
are in a BW cell. If you get the standard options (Cut, Copy, Paste) you are in an Excel
cell.
If you are formatting an Excel cell, format it using the option from the Excel menu. If it
is a BW cell, format it using the option from the BW toolbar (5th button from the right).
When you refresh, you will retain your settings.
Also, as a hint, you don't have to select all of your BW cells when formatting. Select one
characteristic and format it using the BW toolbar and all characteristics will be
formatted; the same goes for key figures, free characteristics, etc.
BW uses the "BW Style Tool" to format. For more detail and customization follow the
steps below:

1. Download OSS Note #300570.


2. On your machine, find and execute SAPBEXS.XLA -> click Enable Macros.

Stop a scheduled infopacket


-----Original Message-----
Subject: Stop a scheduled infopacket
I have tried using SM50 to stop a process that has been running for
quite some time. How do I kill a scheduled InfoPackage?
Thanks,
-----Reply Message-----
Subject: RE: Stop a scheduled infopacket
Hello,
Select the process and press F9, or kill it from the OS level
using the UNIX command kill -9 <process no>.
Happy going..
-----Reply Message-----
Subject: RE: Stop a scheduled infopacket
Kill the job on the R/3 side by using SM37 for user ALEREMOTE.
Hope this helps...
Thanks.

Delete unwanted Objects in QA system


I have deleted unwanted update rules and InfoSources (that had already been
transported to the QA system) in my DEV system. How do I get them out of my QA
system? I cannot find the deletions in any transports that I have created, although
they could be buried somewhere. Any help would be appreciated.
I had the same problem as you, and I have been told there is a way to delete the
unwanted objects: you may request the Basis team to open up the test box temporarily to
remove the obsolete update rules and InfoSources. Remember to delete the request
created in test after you have removed the update rules and InfoSources.
When I tried to delete the master data, I got the following message: "Lock NOT set
for: Deleting master data attributes". What do I need to do in order to be able to
delete the master data?
Since, technically, the master data tables are not locked via SAP locks but via a
BW-specific locking mechanism, it may occur in certain situations that a lock is retained
after the termination of one of the above transactions. This always happens if the monitor
no longer has control, for example in the case of a short dump. If the monitor gets
control back after an update termination (the regular case), it analyzes whether all update
processes (data packets) for a request have been updated or whether they have
terminated. If this is the case, the lock is removed.
Since the master data table lock is not an SAP lock, it can neither be displayed nor
deleted via transaction SM12. There is an overview transaction in the BW system which
can display and delete all currently existing master data table locks: via the button with
the lock icon in the monitor, or via transaction RS12, you can branch to this overview.
A maximum of two locks is possible for each basis characteristic:
- a lock of the master data attribute tables
- a lock of the text table
Changed by, request number, date and time are displayed for every lock. Furthermore, a
flag in the overview shows whether the lock was created via master data maintenance
or master data deletion.
During a loading process, the first update process starting to update data into the BW
system (several update processes may run in parallel for each data request) sets the lock
entry. All other processes only check whether they belong to the same data request. The
last process, which has either updated or terminated, causes the monitor to trigger the
deletion of the lock.

LO Cockpit Step By Step


Here is LO Cockpit step by step.
LO EXTRACTION
- Go to transaction LBWE (LO Customizing Cockpit)

1) Select the logistics application, e.g. SD Sales BW, Extract Structures.
2) Select the desired extract structure and deactivate it first.
3) Give the transport request number and continue.
4) Click on 'Maintenance' to maintain the extract structure:
select the fields of your choice and continue.
Maintain the DataSource if needed.
5) Activate the extract structure.
6) Give the transport request number and continue.
- The next step is to delete the setup tables:
7) Go to T-code SBIW.
8) Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Delete the contents of the setup tables (T-code LBWG)
vi. Select the application (01 - Sales & Distribution) and execute.
- Now fill the setup tables:
9) Select Business Information Warehouse
i. Settings for Application-Specific DataSources
ii. Logistics
iii. Managing Extract Structures
iv. Initialization
v. Filling the setup tables
vi. Application-Specific Setup of Statistical Data
vii. SD Sales Orders - Perform Setup (T-code OLI7BW):
specify a run name, time and date (put a future date), then
execute.
- Check the data in the setup tables with RSA3.
- Replicate the DataSource.

Use of setup tables:


You fill the setup tables in the R/3 system (via SBIW) and extract the data to BW; after
that you can do delta extractions by initializing the extractor. Full loads are always taken
from the setup tables.
