Some of the Critical Issues faced in Projects.
1) Unplanned system outages.
A) Sometimes the source systems become unavailable (an outage) without any prior notice, i.e. the source system stops working suddenly. All the loads running in BW then fail. Once the source system is back, we need to delete all the red requests and repeat the IPs. This is very critical because we must respond very fast and repeat all the failed loads within a short span of time so that the loads finish in time for next-day report availability.
2) System crash.
3) Attribute change run issue.
4) Yesterday's trade report still running today.
5) Data mart problem.
6) Lock waits and deadlocks (DB6 Cockpit, ST05).
Useful Transaction Codes:
DB02 : Database monitor (lock waits)
RSA3 : Extract checker
RSA7 : Delta queue
RSMO : Monitor for all InfoPackage loads
RSPC : Process chain maintenance / monitor
RSPC1 : Monitor a single process chain
RSPCM : Monitor the process chains added to the watch list
RSRQ : Individual request monitor
RSRT : Query monitor
SE11 : ABAP Dictionary (view tables)
SE37 : Execute function modules
SE38 : Execute ABAP programs
SM37 : Background job monitor
SM50 : Local work process overview
SM51 : List of application servers
SM66 : Global work process overview
ST04 : Database performance monitor (lock waits)
ST22 : Short dump analysis
ABC Project
Team
Ticket analysis and resolution; worked on Tivoli tickets.
Once a ticket arrives in our queue, we assign the ticket to ourselves. First we study the ticket: what is its priority (P2 / P3 / P4)? To which system is it related? For what error was it raised? Then we start working on the ticket. Once the problem is solved, we close the ticket if it was raised by the production team. If it was raised by the client or the market people, we take confirmation from the client before closing it.
We have a tool called Tivoli. Some of the process chains, such as the critical chains, are configured in this tool. The tool monitors the configured chains while they are running. If any chain fails or runs for a long time, the tool automatically raises a ticket, assigns it to our team, and sends a mail to the resolution team.
Communicating with the customer who posted the ticket.
If we are working on a ticket raised by the customer/client and need any clarification, we write a detailed mail to the client; only after the client's confirmation do we proceed with the ticket.
For example, tickets related to selective deletion of data from production systems, deletion of the complete data from a target followed by a reload, or init/full loads for a delta target, etc.
Created ad-hoc InfoPackages and scheduled data into the data targets based on the RM's.
We have tickets called ad-hoc tickets. These are raised by the customer/client to do a particular analysis in the SEM systems. We have to load particular data using flat files, which the client provides. Generally we work on these tickets on weekends, after the daily loads have finished, because we have fewer loads on weekends.
Set up the statistical setup for the LO Cockpit and used the V3 update for delta management.
First de-schedule the chain, delete the data from the target, clear the delta queue (RSA7), fill the setup tables (OLI*BW), run the init InfoPackage, set up the delta mechanism in LBWE (Direct Delta / Queued Delta / Serialized or Unserialized V3), and then run the delta InfoPackage.
As a team member, worked extensively on solving P2, P3 and P4 tickets for all the GC's in ABC.
I worked on P2 / P3 / P4 tickets.
P2 ticket: a high-priority ticket. We have to update the ticket every 30 minutes with the latest status and resolve it within 4 hours; if not, we inform the client with a detailed mail and assign the ticket to the onsite team. Examples: long-running IPs because of a huge volume of data, multi/meta chains, Basis-related issues, critical chains such as the zone-related chain (all the markets are affected by this chain), or master data issues in the main chains, etc.
P3 ticket: a second-priority ticket. We have to update the ticket every 2 hours with the latest status and resolve it within 8 to 12 hours; if not, we inform the client with a detailed mail and assign the ticket to the onsite team. Examples: data load failures in non-critical chains such as market-related chains (which affect only that particular market), non-critical report chains, or pre-production issues, etc.
P4 ticket: a low-priority ticket. We have to update the ticket once a day with the latest status and resolve it within one week; if not, we inform the client with a detailed mail and assign the ticket to the onsite team. Examples: chains related to the quality and regression systems, non-dependent chains, etc.
Assisting junior team members in error resolution during data loads and with any other technical issues.
As a senior team member, helped the ELPT's while monitoring the loads, whenever they faced issues related to the loads, and helped them in solving those issues.
Filling the setup tables based on selection criteria and loading a full repair to the BW side if there is any inconsistency.
When we find data inconsistency in a target, i.e. some data is missed or not loaded, we go for full repair loads.
First we fill the setup tables using the selection criteria on the R3 side, then run the Full Repair InfoPackage in BW.
To set up a Full Repair IP:
Go to the InfoPackage * open Scheduler from the toolbar * click on Repair Full Request.
Check the checkbox Indicate Request as Repair Request.
Go to the Update tab * select Full Update * schedule the load.
Production / Monitoring Team
Process chain monitoring using the RSPC, RSPCM and RSPC1 transactions: monitoring of the daily / weekly / monthly load chains.
RSPC: displays the total list of process chains available in the system; select a process chain to monitor it.
RSPCM: used to monitor the list of process chains that have already been added to the list (RSPCM shows Status / Chain / Date / Time / Name / Log ID, etc.).
RSPC1: used to monitor a single process chain; provide the chain name or log ID and execute, and it goes directly to that particular chain.
(Process Chain ID / Log ID )
* Attribute Change Run Monitoring.
- RSA1 --> Tools --> Apply Hierarchy/Attribute Change --> click on Monitor.
- Individual request monitoring for error handling.
- Monitored individual requests using the RSRQ transaction.
- Job monitoring using the SM37 transaction code.
- Short dump analysis using ST22, raising tickets when required.
Errors in SAP BW
Short Dump Problems: (ST22)
1) TSV_TNEW_PAGE_ALLOC_FAILED / TIME_OUT / no roll memory.
A) These short dumps occur because of insufficient memory. Solution: decrease the data packet size and re-run the IP. If it fails again, assign the ticket to the Basis team and ask them to increase the memory; once that is done, re-run the IP.
2) RSQL SQL error.
A) This type of short dump occurs because another chain is deleting data from the same PSA table. Solution: wait until the PSA deletion completes; once it is done, delete the red request and proceed with the loading.
3) ITAB_DUPLICATE_KEY.
A) This error comes because, while loading transaction data, a lookup checks whether the master data exists (is loaded or not). ITAB is an internal table; while inserting master data records into the internal table, for example for 0MATERIAL, duplicate key values were found, i.e. both the M version (modified) and the A version (active). Solution: run the attribute change run for 0MATERIAL (RSA1 * Tools * Apply Hierarchy/Attribute Change * select 0MATERIAL and execute); once the change run has completed, re-run the IP.
4) Gateway not assigned.
A) This short dump occurs when there is a field mismatch between R3 and BW, when the source system transfer structure is not active, or when IDocs are blocked, etc. Solution: first identify the mismatched field, reactivate the DataSource, replicate the DataSource in BW, and repeat the IP.
To activate the transfer structure, go to SE38, execute the program RS_TRANSTRU_ACTIVATE_ALL, give the source system name and InfoSource name, and execute it.
To activate Data source * Go to SE38 * Run the program * RSDS_DATASOURCE_ACTIVATE_ALL
To activate All Source Systems * Go to SE38 * Run the program * RSAR_LOGICAL_SYSTEMS_ACTIVATE
To activate Update Rules * Go to SE38 * Run the program * RSAU_UPDR_REACTIVATE_ALL
ODS Activation Problems:
5) SID issue.
A) Load the master data and repeat the ODS activation.
6) Requests have different aggregation behaviour.
A) One request has addition functionality and another request has overwrite functionality in the update rules. Solution: activate the ODS requests one after another, one by one.
7) Key value exists in duplicate.
A) The unique data records flag is checked in the ODS settings, so duplicate keys make the activation fail. (If the flag is not checked, the system checks every record against the active data, whether it already exists or not, which takes a long time but allows overwriting.) Solution: set the unique flag unchecked and activate the ODS. We have an internal program to uncheck the ODS unique setting; otherwise we raise a ticket to the development team to change the unique key setting temporarily.
8) ODS built incorrectly.
A) This error comes because of a missing delta request. Solution: identify the request using the request number (in RSRQ, give the request number and execute) and change its status to red, or if the request is still in the PSA, load it from the PSA; that solves the problem.
9) Error with status 5.
A) RFC connection related issue. Solution: check the RFC connection; if it is fine, try to repeat the activation, otherwise raise a ticket to the Basis team.
10) Lock not set.
A) The ODS activation fails because a load is running from the DSO to a further data target, which locks the ODS, or there may be locks in SM12; we need to delete them and re-run the activation process.
Master Data Issues:
11) Lock issues.
A) This issue comes when some other load or a change run is running on the same master data. Repeat the load once the change run has finished.
12) Invalid characters.
A) Delete the request from the target, edit the data in the PSA, and reload it from the PSA to the target. Also raise a ticket and reassign it to the data quality team to correct the data permanently on the R3 side.
13) Alpha-conforming value.
A) Caused by the conversion setting at the InfoObject level. Solution: first check whether the ALPHA conversion checkbox is checked at the transfer structure level; if not, check it and repeat the IP.
14) Update mode R.
A) The DataSource does not support repeat delta. Solution: delete the request, run an init with data transfer, and then run the delta load. For example, the COPA DataSource does not support repeat delta.
COPA (time stamp realign): Delete the request from the target. * Now delete the time stamp in the R3 system: go to SE16 * give the table name TKEBWTS * execute it * copy the high time stamp (TS_HIGH) of the previous request. * Go to KEB5 * paste the high time stamp in the field called Direct Entry, give the DataSource name and execute it; the time stamp is then deleted from the table. * Go to BW and repeat the delta load.
To find the time stamp details for a COPA DataSource:
Go to KEB5 * give the DataSource and execute it. It shows the time stamp details request by request.
Time stamp error / data has to be replicated / invalidated in the source system / transfer structure is inactive: some changes happened on the R3 side and were not replicated to the BW side. Solution: replicate the DataSource, activate it, and re-run the IP (see the RS_TRANSTRU_ACTIVATE_ALL step above to activate the transfer structure).
Generic Extraction:
When the Business Content DataSources do not match our requirement, we go for generic extraction to generate a DataSource.
Steps: Go to SE11 to create a view if the data is in different tables. * Create the view from the given database tables and fields using join conditions as per the requirement and activate it. * Go to RSO2, give the DataSource name (for transaction data / master data / texts) and click the Create button. * Select the application component (SD / MM / PP). * Select the view or table name. * Customize the extract structure using the Selection, Hide, Inversion and Field Only options, save it and generate the DataSource. * Check the data in the extract checker by giving the DataSource name in RSA3. * Transport the DataSource (RSA6) and replicate it into BW.
Generic delta management: based on Time Stamp, Calendar Day (0CALDAY), or Numeric Pointer.
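As a rough illustration of how a generic delta on a time stamp behaves (this is only a sketch under assumed names, not the code the Service API actually generates): at each delta run the extractor effectively restricts the selection on the delta-relevant field to the interval between the last delta pointer and the current upper limit, adjusted by the safety intervals. The view name ZV_SALES_BW and the field AEDAT_TS below are hypothetical.

* Sketch only: generic delta selection on a time stamp field.
DATA: lt_data TYPE STANDARD TABLE OF zv_sales_bw, " hypothetical generic-extraction view
      lv_low  TYPE timestamp,  " last delta pointer minus lower safety interval
      lv_high TYPE timestamp.  " current time minus upper safety interval

* Only records whose delta field falls into the new interval are extracted;
* BW then stores lv_high as the delta pointer for the next run.
SELECT * FROM zv_sales_bw
  INTO TABLE lt_data
  WHERE aedat_ts >  lv_low
    AND aedat_ts <= lv_high.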
Data Source Enhancement Steps
R3 side: * Select the DataSource in RSA5 and transfer it (if it is new; otherwise go to RSA6 and proceed). * Select the DataSource in RSA6 and open the extract structure (double click on it). * Click on Enhance Extract Structure / Append Structure to append a field. * It asks for the name and properties of the field; give the name and the properties (data type, length, etc.). * Activate the DataSource. * Go to the postprocessing of the DataSource (RSA6) and edit the DataSource (select the DataSource and click Edit). * Check whether the appended field has been added. * By default the Hide checkbox is ticked for the new field; remove it and tick the Field Only checkbox. * Activate the DataSource. * Go to transaction CMOD and create a project (give a project name and create it). * Go to Components and enter the SAP enhancement RSAP0001.
* RSAP0001 contains 4 user exits:
1) Exit_SAPLRSAP_001 (Transaction Data)
2) Exit_SAPLRSAP_002 (Attribute)
3) Exit_SAPLRSAP_003 (Text)
4) Exit_SAPLRSAP_004 (Hierarchy)
* Select the user exit based on your DataSource type (transaction data / attribute / text / hierarchy). * Double click on the selected user exit; it opens a customer include program (for the transaction data exit this is ZXRSAU01) in which we write the code. * Click on Insert Code and write the code as per the requirement (from which table the data for the enhanced field is populated, table declarations, variable declarations, data types, etc.). * Save and activate the code. * Go to BW and replicate the DataSource.
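A minimal sketch of the kind of code that goes into this include, assuming a transaction-data DataSource 2LIS_11_VAITM enhanced with an appended field ZZREGION that is filled from the customer master; the DataSource, the field and the lookup on KNA1 are illustrative assumptions, not taken from these notes.

* Include ZXRSAU01, called from EXIT_SAPLRSAP_001 (transaction data).
* Assumes the extract structure MC11VA0ITM has been appended with ZZREGION.
DATA: ls_vaitm TYPE mc11va0itm.

CASE i_datasource.
  WHEN '2LIS_11_VAITM'.
    LOOP AT c_t_data INTO ls_vaitm.
      " Derive the enhanced field from the sold-to party's master record
      SELECT SINGLE regio FROM kna1
        INTO ls_vaitm-zzregion
        WHERE kunnr = ls_vaitm-kunnr.
      MODIFY c_t_data FROM ls_vaitm.
    ENDLOOP.
ENDCASE.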
Cube Enhancement, BW side:
* Create a new InfoObject and add the field to the cube (if it is BW 3.5, delete the total data in the cube first). * Go to InfoCube remodeling and create a remodeling rule. * Add the InfoObject (Add Characteristic / Add Key Figure / Replace Characteristic / Replace Key Figure). * Assign it to a dimension and activate the InfoCube. * Go to the transformation and map the new InfoObject to the newly created field in the DataSource. * Create the DTP and load the data. * From the next load onwards, data is loaded for the newly added InfoObject in the InfoCube.
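If the new InfoObject cannot simply be mapped 1:1, the mapping can be done with a field routine in the BI 7.0 transformation. The following is only a sketch of the body of the generated field-routine method; the target InfoObject ZREGION and the source field ZZREGION are assumptions.

* Field routine for the target InfoObject ZREGION (generated method skeleton).
METHOD compute_zregion.
  " Straight move from the enhanced DataSource field; any derivation
  " or lookup logic for the new InfoObject would go here instead.
  RESULT = source_fields-zzregion.
ENDMETHOD.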
------------------------------------------------------------------------------------------------------
LO’s Step By Step
R3 * Activate the DataSource, which is available in the delivered version [RSA5] * Maintain/generate the extract structure [LBWE] * Maintain the DataSource [LBWE] * Activate the DataSource / extract structure [LBWE] * Transport the DataSource [RSA6]
BW * Replicate the DataSource in BW [RSA1] * Assign the InfoSource [RSA1] * Maintain the communication structure & transfer rules * Maintain the InfoCube & update rules
R3 * Run the statistical setup to fill the data into the setup tables [OLI*BW] * Check the data in the extract checker [RSA3]
BW* Create Infopackage and schedule initial load
R3 * Delete setup tables [LBWG] * Setup periodic V3 update [LBWE]
BW* Schedule Infopackage for delta load [RSA1]
------------------------------------------------------------------------------------------------------
V1 Update [LBW1]: V1 is a synchronous update. If we send a record, it responds "received 1 record", and it gives such a response for every record, so it is very slow.
V2 Update [LBW1]: V2 is an asynchronous update. If we send a record, it does not reply like the V1 update. It is fast compared to V1; generally we use the V2 update.
V3 Update [LBWE]: V3 is an asynchronous background job scheduled in R3. Once the documents are collected in the update queue [SM13], the V3 job runs; it simply moves the records from the update queue to the delta queue. [Go to LBWE * Job Control * set up the periodic update.] Generally we run it every 2 hours, depending on the record volume.
Why fill setup tables: We keep data in setup tables because the base tables contain a lot of data, but we only need the data corresponding to the extract structure, so only that data is kept in the setup tables. We run the statistical setup to fill them [OLI*BW]. Accessing the base tables directly is not possible while the application is running (i.e. while users are posting new documents or modifying existing ones).
Why delete setup tables: After the initialization we delete the setup table data because it is no longer needed; leaving it there occupies space unnecessarily and could cause duplicates if the setup tables are refilled later.
LUWs: Logical Units of Work. When the application runs, it produces documents; each of these is an LUW. "Application running" means creating a new document or changing an existing one.
Delta queue [RSA7]: a table in R3 that contains all modified and newly added records. It is used to implement the delta update. After the V3 update, all records in the update queue are moved to the delta queue; then we can schedule the delta load. After a successful delta load, the delta queue becomes empty (0 records).
(Only after a successful init load do we find the delta queue entry (DataSource name) in RSA7.)
Update queue [SM13]: a table in R3 that contains all modified and newly added records. When the application runs, i.e. any new records are created or existing records are modified, the V2 job runs and puts all the records into this update queue.
------------------------------------------------------------------------------------------------------
Customized Reports
On Sales Overview Cube [0SD_C03]
1) Sales trend analysis: a report on the sales trend for a particular material. When the report is executed, it asks for 0CALMONTH; after entering it, the report shows the sales trend for the previous 3 months. (For example, if I enter Apr 2006, it shows the sales trend for Jan 2006, Feb 2006 and Mar 2006.)
* To generate the sales trend for the previous months we used variable offsets and restricted key figures in this report.
* The variable offsets are applied to the 0CALMONTH variable: offset -1 for the previous month, -2 for two months prior, and so on (+1 would be the next month, +2 the next two months, ...).
* The restricted key figure is restricted by 0DocumentClass and 0DebitCredit.
* 0DocumentClass is a characteristic that holds the value [O = order value], and 0DebitCredit is a characteristic that holds the values [C = negative sales documents (credit), D = positive sales documents (debit)].
* Finally we have to generate the incoming order value of the material:
* Incoming Order Value (restricted key figure) = Net Value in Statistical Currency, restricted by 0DocumentClass and 0DebitCredit.
On Sales Overview Cube [0SD_C03]
2) Delivery delay analysis: to find the actual quantity delivered to the customer. For example, the customer ordered some 1000 units; we now calculate the Fulfillment (calculated key figure) of the given order based on the Incoming Order Qty and the Open Order Qty.
* Fulfillment (calculated key figure) = Incoming Order Qty (restricted key figure) - Open Order Qty (key figure). * Incoming Order Qty (restricted key figure) = Qty in Unit of Measure (key figure), restricted by 0DocumentClass and 0DebitCredit.
* 0DocumentClass is a characteristic that holds the value [O = order value], and 0DebitCredit is a characteristic that holds the values [C = negative sales documents (credit), D = positive sales documents (debit)]. * Finally we have to calculate the actual quantity delivered to the customer.
-----------------------------------------------------------------------------------------------------
Bex Query Designer
In the 3.x version the BEx Query Designer contains 5 blocks, but in 7.0 it contains 8 blocks; the Messages, Properties and Filters blocks are newly added.
-----------------------------------------------------------------------------------------------------
Bex Analyzer
In 7.0 the BEx Analyzer is an upgraded version with more navigation and formatting buttons. In the 3.x version the BEx Analyzer contains two toolboxes: the Analysis toolbox, which contains 8 buttons, and the BEx Explorer toolbox, which contains 9 buttons. In 7.0 the explorer toolbox is called the BEx Design toolbox; it has been completely changed and contains 13 navigation and formatting buttons [Design Mode, Analysis Grid, Navigation Pane, Drop-Down Box, Checkbox, Radio Button, Messages, Workbooks & Settings].
-----------------------------------------------------------------------------------------------------
Bex web analyzer
The BEx Web Analyzer is a newly added reporting tool in the BI suite. It is developed for business experts to generate ad-hoc reports using drag & drop or wizard-based conditions and exceptions. When the analysis is complete, you can save the results in a folder so they are accessible to other departments, and the results can be broadcast for future use.
-----------------------------------------------------------------------------------------------------
Bex Report Designer:
The BEx Report Designer is a visual tool. You can make reports more precise by adding headers and footers. For more layout design at cell, column or row level we go for the Report Designer. Compared with 3.x Excel sheets, the BI 7.0 Report Designer has more layout design options, but compared with the WAD it has fewer options.
-----------------------------------------------------------------------------------------------------
WAD:
The WAD allows you to develop better web reports with enhanced navigation and analysis features. Its main advantage is that any source of data can be used as a data provider. When you open the WAD work area it contains 3 tab pages: Layout, XHTML and Overview. In the 3.x version BW uses HTML code, but in the 7.0 version BI uses XHTML code. In the 3.x version we have 23 items in one group, but in 7.0 they are divided into 4 groups (e.g. Button Group, Container Layout & Property Pane).
-----------------------------------------------------------------------------------------------------
Calculated key figure:
To calculate a new key figure based on existing key figures at the InfoProvider level, we go for a calculated key figure, using mathematical functions in the calculation. Anything (CKF or RKF) created at the InfoProvider level is global: a calculated key figure is global and is a reusable component for that InfoProvider, and the calculation applies to the entire column. Example: in the data target we have order quantity and delivery quantity. If the client wants to see the open quantity, which is not available in the InfoProvider, we create a calculated key figure [Open Quantity = Order Quantity - Delivery Quantity].
-----------------------------------------------------------------------------------------------------
Formula:
A formula is local. It holds the same functionality as a calculated key figure, but it is not a reusable component; it is specific to that particular query only. Normally we use a new formula in structure elements. A new formula contains only 6 functions: 3 percentage functions and 3 data functions.
-----------------------------------------------------------------------------------------------------
Restricted key figures:
When you want to restrict a key figure based on one or more characteristic values, with respect to a row or column, we go for a restricted key figure. Using an RKF you can focus on certain values of a query. Restricted key figures are global and are based on the basic key figures of an InfoProvider; they are reusable components for all queries on that particular InfoProvider.
Example: normally we use RKFs to compare two key figure values in different columns, where one of the columns is restricted. With Region as a characteristic in the InfoProvider, EU sales can be shown in one column and NA sales in another; each column is restricted by a characteristic value in the query output for comparison.
-----------------------------------------------------------------------------------------------------
Filters:
Using a filter, we can restrict the values of an InfoObject during the runtime of a query. When the restriction is done with a filter, it affects the whole query; when it is done with a restricted key figure, it affects only that column. Example: in the InfoProvider we have values for north, south, east and west. If you put a filter on the north value using "include", we see only the north value; the other values do not appear in the result. If you use "exclude", the other values are shown except the filtered value.
-----------------------------------------------------------------------------------------------------
Free characteristics:
Based on the requirement, we keep a characteristic as a free characteristic so that it can be used for drill-down in the result output. If you keep a characteristic as a free characteristic and execute the query, it does not appear in the columns, but it is available in the navigation block. If the client wants to see the result for that particular InfoObject, right-click next to the free characteristic cell and the result is shown in the query.
-----------------------------------------------------------------------------------------------------
Conditions:
When we want to restrict the output of a query based on key figure values, we go for conditions. By defining conditions, we can analyze query results in more detail. A condition affects the output of the query, and all conditions are applied depending on the active checkbox selected with and/or. (Filters, in contrast, affect the output based on characteristic values.) Example: if the client wants to see the top 10 customers or the best-selling products, or vice versa, out of 100 customers or sales areas, we go for conditions.
-----------------------------------------------------------------------------------------------------
Exceptions:
To provide alerts or to highlight certain key figure values in a query, we use exceptions. If the client wants particular key figure values highlighted in colour for analysis or demonstration purposes, we use exceptions on those key figures. Exceptions can be built only on key figures that act as structure elements. Exceptions degrade query performance, hence they are used only for critical reports such as profitability analysis, and only on the result.
-----------------------------------------------------------------------------------------------------
Structures:
Structures are reusable components. We use structures to provide a level-up and level-down effect. Structures are of two types: local and global. Structures saved at the InfoProvider level are called global structures and must contain a key figure. When a structure is saved it appears in the left pane, so that it can be reused in any number of queries.
-----------------------------------------------------------------------------------------------------
Cell definitions:
When we want to define a selection or formula for an individual cell, we use cell definitions. Cell definitions can be built only when 2 structures are used in the report.
-----------------------------------------------------------------------------------------------------
New Selection:
A new selection is local. It holds the same functionality as a restricted key figure, but it is specific to that particular query only and is not reusable. New selections are normally built inside structures.
-----------------------------------------------------------------------------------------------------
Variables:
Variable types: characteristic value, text, formula, hierarchy and hierarchy node variables. Processing types: user entry / default value, customer exit, SAP exit, replacement path and authorization.
Characteristic variable with user entry / default value:
The user enters the value, or default values are fixed. The main purpose of this variable is to make the query dynamic, i.e. to add parameterized values. Unlike fixed values, variables give the option of specifying values dynamically at query runtime.
Replacement path:
When you want a variable to be replaced automatically, for example with the result of another query, we create a characteristic variable with the replacement path processing type.
-----------------------------------------------------------------------------------------------------
CO Extraction (Cost Center Accounting):
CUBE: 0CC_C11 Costs and Allocations (Cost Centers)
0CO_OM_CCA_1: this DataSource loads plan data with a monthly full upload. 0CO_OM_CCA_9: this DataSource loads actual line item data with delta loads.
Tables: COKP feeds data to the controlling line item DataSources.
This extraction also uses the time stamp table BWOM2_TIMEST and the safety delta setting.
Characteristics:
Company code, controlling area, group currency, operating concern currency, debit currency, global company currency, credit for number of suppliers, chart of accounts.
Key figures:
0Material Type, 0Amount
Time characteristics:
0FiscVarnt (fiscal year variant), 0FiscPer (fiscal period), 0FiscPer3 (posting period)
0CO_OM_CCA_9:
Used to extract actual line item data; it supports a delta mechanism based on a time stamp. Because the delta runs on a time stamp, late postings can lead to missing delta documents.
To overcome this, a safety time stamp can be specified in the table BWOM2_V_SAFETY, but this can cause the DataSource to bring duplicate records. We therefore extracted the actual data by scheduling a daily full-update InfoPackage with the data selection set to the current period and with deletion of overlapping requests.
0CO_OM_CCA_1:
Used to extract plan data: targets, budgets, commitments and forecasts. This DataSource does not support delta. We extracted the data monthly using a full upload, with the data selection from the first period to the last period and deletion of overlapping requests. Planning is done monthly, on the first business day.
-----------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------
BI7 Concepts
Data migration: Before migrating a DataSource from BW 3.x to BI 7, check whether any of its transfer rules and update rules contain routines. If there are no routines, migrate the DataSource directly; if there are routines, back them up, and after the migration recreate the routines. The table RSDSEXPORT stores the migrated (exported) DataSource; use transaction RSDS for recovery.
How to migrate the DataSource: Right-click on the 3.x DataSource and choose Migrate. We get two options, With Export and Without Export. If you select Without Export, the transfer rules and update rules are deleted, so restoring the structure back to 3.x is not possible (nothing is stored in the RSDSEXPORT table). If we select With Export, the transfer and update rules are stored in the RSDSEXPORT table, so restoring back is possible. Generally we prefer With Export. After the migration the DataSource is converted to the BI 7.0 version; then select the migrated DataSource, create the transformation and DTP, and schedule it.
Upgrade: The upgrade is always done through the system landscape. All BW objects should be in the active version. We have to stop all the V3 jobs in SAP R3, reconcile all the boxes, and stop the scheduling of all InfoPackages. The Basis person upgrades the box; we can see the applied patches in SPAM. We are involved in regression testing: check the GUI, check the extractors, reschedule the V3 jobs and reschedule the InfoPackages.
Standard DSO: has three tables (activation queue, active data table and change log) and six settings; deltas to further targets are supplied via the change log.
Write-optimized DSO: Whenever we have multiple data loads or mass data loads we go for a write-optimized DSO. It contains only one table, the active data table. It has only 2 settings compared to the standard DSO's 6 settings: the type of DSO and "do not check uniqueness of data" (if this checkbox is checked, duplicate records are allowed). It supports the DTP functionality and reporting is possible. Here the semantic key fields act as the key and the data fields act as non-key columns (key figures).
DTP (Data Transfer Process): The DTP is part of the data flow control in BI. An enterprise data warehouse may contain many layers. In BI, InfoPackages load data from the source system only as far as the DataSource/PSA; from there we define a DTP to move the data from the DataSource to the data targets. In another scenario, we define a DTP for an open hub destination to retract data from the BI system to a flat file destination. The DTP contains 3 tab pages: Extraction, Update and Execute.
DTP advantages: separate delta mechanisms to different data targets; with the filter option it picks data based on the selection only; improved performance in data loading; and error handling at the DataSource level using an error DTP. A DTP works with Full or Delta; it does not need a separate init. The DTP delta mechanism works based on the request number in the PSA.
Direct access DTP: used with VirtualProviders for master and transaction data. Using a direct access DTP, we enable a link to remote data without physically storing it in the architecture, so there is no need to burden the system with processing loads. In the 3.x version we could access data only from SAP remote applications, but in BI 7.0, using a direct access DTP, we can access data from other applications as well. In 3.x we had no facility to view the contents of remote cubes, but in 7.0 we have the flexibility of displaying the data through the direct access DTP.
Error DTP
Data is loaded via the InfoPackage from the source system into the PSA table; there is no error handling available for the InfoPackage itself. The DTP contains 3 tab pages: Extraction, Update and Execute. On the Update tab page, if you expand Error Handling, you get 4 options: Deactivated; No Update, No Reporting; Valid Records Updated, No Reporting (request red); Valid Records Updated, Reporting Possible (request green). The last three options maintain the error stack. The main advantage is that it keeps the sequence of records, for consistency in error handling.
Virtual providers: Like MultiProviders and InfoSets, VirtualProviders do not store data physically. From the InfoCube point of view, VirtualProviders are of 3 types: based on DTP, based on BAPI, and based on a function module. VirtualProviders are used when the data should stay in the source system and be read directly at query runtime rather than being loaded into BI.
Real-time data acquisition: "Real time" means that data is available for reporting as soon as (or very soon after) it is available in the transactional system. Whenever the reporting requirement is within one hour, to provide operational information and to support tactical decisions, we go for real-time data acquisition.
Real-time cube: Integrated planning is based on a real-time InfoProvider; "real-time InfoProvider" is the new name for a transactional InfoProvider. Planning is done on a real-time cube, and the cube supports either loading or planning at any one time: when it can be loaded with data, no planning is allowed, and when it can be planned, no data loading is allowed.
Re-modeling: In the 3.x version, when you wanted to remodel, the cube had to be empty and there was no remodeling rule facility. In the 7.0 version, even if the cube contains data, we can remodel without disturbing the cube data. Remodeling rules apply to InfoCubes only, not to DSOs. SAP recommends taking a backup of the data before remodeling as a thumb rule. After remodeling, the transformation becomes inactive; activate it again. Remodeling has 6 operations: adding, deleting and replacing a characteristic, and adding, deleting and replacing a key figure. RSMRT is the transaction code. Modes of filling for characteristics: constant, attribute of a characteristic, 1:1 mapping, customer exit. Modes of filling for key figures: constant and user exit.
Re-partitioning: Even in 7.0, for the initial partitioning the cube must be empty. In the 3.x version there was no facility to re-partition again, but in 7.0, even if the cube contains data, we can re-partition without disturbing the cube data. SAP recommends taking a backup of the data before re-partitioning as a thumb rule. Right-click on the InfoCube, select Additional Functions and select Re-partitioning; we get 3 options: adding partitions, merging partitions and complete re-partitioning. Enter the cube name, click on Initialize, enter the partitioning months, click OK, choose Immediate, save, and check/refresh the monitor. SE14 is the transaction code to check the partitions.
Daemon: The daemon is a process used to initiate and control the data transfer in push and pull scenarios for real-time data acquisition. It runs at regular intervals; using transaction RSRDA, if the period in the real-time monitor is set to 5 minutes, it runs on that schedule and performs the data loads. The RDA daemon monitor gives an overview of the status of the 2 assigned processes: the InfoPackage for RDA and the DTP for RDA attached to the daemon. It shows 4 statuses: daemon active and running, daemon not active, daemon has an error, and daemon is stopped.
Data slices: Data slices are used to protect whole areas of plan data against changes; they work in the opposite way to a filter. In integrated planning, on the InfoProvider tab page, at the bottom, we find the Data Slices tab.
Input-ready query: An input-ready query is a query defined on an aggregation level of an InfoProvider, and it can be used for manual planning. Here we give the end user the option to enter values manually, for example for forecasting.
Planning Functions
Copy: copy data, e.g. from the actual version to the plan version
Repost: repost, e.g. revenue region-wise
Revaluation: revaluate, e.g. planned sales by a percentage
Distribution by keys: divide a value into several factors
Delete: delete the data in the plan version
Distribution with reference data: e.g. distribute supplies to products
Formula (extended formula, FOX): mathematical functions to calculate plan data
Unit conversion: convert the units of key figures
Currency translation: translate currencies, e.g. EUR to USD
Open hub destination:
In my previous project the client wanted to retract data from the EU cube using an open hub destination. The source template types are DataSource, InfoSource, DSO, InfoCube and InfoObject; the destination types are database table, file (flat file or application server) and third-party tool. Create the transformation, create the DTP, and retract the data.
Newly added process types:
There are 43 process types in total, of which 16 are newly added. General: Interrupt Process, Decision Between Multiple Alternatives, Workflow (remote also), Is the Process Chain Still Active?. Load & subsequent processing: Data Transfer Process, Trigger Event Data Change (broadcaster), Close Request of an InfoPackage. Data target administration: Initial Fill of New Aggregates, Archive Data from an InfoProvider, Delete Entire Content of a Transactional DSO. Other BW processes: Delete Request from the Change Log, Execute Planning Sequence, Switch Real-Time InfoCube to Planning Mode / to Loading Mode. Retail: Send POS Sales Data to the XI System. Other: Last Customer Contact Update.
***************************************