Sap mdg online training | Learn SAP MDG with Best Online Career


SAP Master Data Governance provides a complete audit trail across the master data lifecycle. Get SAP MDG online training from Best Online Career if you are looking to build a career in this area of an organization. Best Online Career provides an SAP MDG online training course with SAP MDG training material, 24/7 server access, and weekday and weekend batches conducted by highly professional industry experts.
For More Information: Visit us: Call us: 8408878222

Posted by abhishekallentics on 2021-04-08 10:15:36

Tagged: SAP, mdg, masterdata, mster, data, online, Training, onlinetraining, saponlinetrainingcourse, sapsdforbeginners, SAPARIBA, saptraininginhyderabad, Hyderabad, Bangalore


What Is SAP HANA Migration to Public Cloud?

Benefits of Deploying SAP HANA on Public Cloud

The leading organizations of tomorrow will thrive on data. That’s a given. The only question is the type of innovation organizations will use to harness data and improve business performance. The clear-cut answer at this moment is SAP-HANA. It is an in-memory database and application development platform for processing high volumes of data in real time. Its combination of speed, design, and analytical capabilities has many data enthusiasts excited.

The primary function of SAP-HANA as a database is to retrieve and sort data as requested by the application. In addition to its function as a database, it can also perform advanced analytics such as predictive analytics, spatial data processing, text analytics, text search, streaming analytics, and graph data processing. Based on these features, a common misconception arose that HANA is an acronym for "High-performance ANalytic Appliance". Contrary to popular belief, HANA isn't an acronym; it is just the name that SAP decided to give its database platform.

A book could be written on the value provided by it. However, we are only going to focus on the value of its in-memory database, predictive analytics, and deployment on cloud. These three value pillars are interdependent in terms of HANA’s overall value; however, each can stand alone based on the value it brings to an organization.

SAP HANA In-Memory Database

It is about 10x faster than a traditional disk-based database. This drastic increase in performance is a result of SAP-HANA keeping its data in memory as opposed to on disk-based hard drives. With in-memory processing speeds, users can access data immediately. Using the high-performance capabilities of an in-memory database does come at a higher cost, however. To offset this, SAP-HANA supports dynamic tiering, which helps reduce costs by giving users the option of storing less frequently accessed data in disk-based storage.

Predictive Analytics

SAP-HANA uses a number of analytic engines for different kinds of data processing. Out of its various kinds of data processing, predictive analytics has the most widespread value for businesses across all industries. Predictive analytics combines the depth and speed of in-memory analytics with the power of native predictive algorithms. Together with SAP’s predictive analysis for visualization and R’s extensive library of statistical data mining techniques, organizations have everything they need to predict the future in real time. Imagine being able to figure out what happened, why it happened, and what comes next in an instant. That is the value it brings in terms of predictive analytics. To make things even sweeter, using it for predictive analytics is easy. Analysts can use a drag-and-drop interface for selection, preparation and processing. They can also create models using predictive algorithms as well as algorithms from open source R. Every user in any line of business can now unlock key insights using it for predictive analytics.

Cloud Deployment

SAP customers have the option of deploying SAP HANA on premise or on cloud. Of these two options, deploying it on public cloud provides the most compelling benefits. Since the value provided by its applications is already high, anything that improves its deliverability makes it much more impactful. The primary benefits of deploying SAP HANA on cloud include:

Faster Deployment

Organizations no longer have to worry about siting, provisioning, and testing hardware infrastructure. With SAP-HANA on cloud, organizations can reduce the time to solution for SAP-HANA to hours.

Scalability of Landscape

With SAP-HANA and its complexities as a critical data storage and application platform, capacity planning can demand substantial time and resources. Organizations hosting it on cloud do not have to worry about capacity planning and can also reduce the up-front costs of carrying additional expensive high-memory hardware.

Reliable, Secure, Flexible Environments

The wide global presence of cloud providers enhances availability and provides greater choice in terms of disaster recovery solutions for SAP HANA.

Key Takeaways

As organizations of all shapes and sizes begin to harness more data, it is becoming apparent that SAP HANA is the bridge that will help companies utilize data in new and exciting ways. A testament to this is the rise in SAP HANA adoption among enterprise customers. With it, organizations will be able to leverage real-time data to identify critical trends and make optimal business decisions faster and more efficiently.

Source by Mahesh Reddy





Training cube brings you this SAP HYBRIS DATAHUB online training by real-time experts, so you can learn the skills at timings convenient to you.

Interested people can register at this link:

If any queries:
Call: +91-9848346149

Posted by am5372101 on 2018-08-31 07:00:49

Tagged: sap, hybris, datahub, online, training


SAP HYBRIS DATAHUB online training



This SAP Hybris course helps candidates work with the Hybris Management Console.

For more information Register Here:

If any queries:
Call: +91-9848346149

Posted by am5372101 on 2018-09-05 10:45:24

Tagged: sap, hybris, datahub, online, training


Migrating Fixed Assets Into SAP – A Harlex Guide

1      Overview

1.1    Introduction

This document is intended as a user guide on how to overcome the common problems in migrating fixed assets into SAP.

For a one-time conversion into SAP, we favour using the LSMW tool. It allows you to leverage the full power of ABAP while using standard SAP processing functions, yet it does a lot of the file management and processing work automatically. However, even within LSMW there are a number of possible methods for migrating fixed assets. 

This document will discuss loading fixed assets using the standard load program RAALTD01, although two alternatives are briefly discussed below. 

1.2    Load methods

1.2.1    BDC recording of transaction AS91

This is the simplest solution and so it might be suitable for a very basic upload. If for example you are not creating fixed assets in SAP, but rather updating one field in fixed assets which already exist in the system, then this might be the right approach. But it is not flexible enough to be used for the creation of fixed asset data.

1.2.2    Business object BUS1022

This would create IDOCs of type FIXEDASSET_CREATEINCLVALUES01 and process them through the SAP BAPI function BAPI_FIXEDASSET_OVRTAKE_CREATE.

This is possibly the solution that SAP would recommend. SAP is keen on BAPIs as they are powerful, flexible and can be easily called from an external system via an RFC. But that does not necessarily make them the right choice for data migration. The structure of BAPIs is not always particularly intuitive and the upfront development work can be complicated.

Also, this method involves the processing of IDOCs. While the standard error handling and reprocessing functionality for IDOCs in SAP is impressive, it is not always transparent. Migrating fixed assets using this method would be suitable for someone who is particularly strong in the area of BAPIs and IDOCs.

1.2.3    Standard load program RAALTD01

This is generally the approach I would favor for the migration of fixed assets into SAP.

The standard load program is not perfect. As discussed later, there are one or two areas which it does not cover, and like many SAP standard programs it has its quirks: for example, when you come to load the assets you do not have the option to create a BDC session, yet if any of the transaction calls fail, the program will create a BDC session for those records.

Nevertheless, it is a powerful and flexible program, and it is relatively simple to use. The fact that you can run the asset load in test mode before creating any data is also a major advantage, even though the test run does not always pick up 100% of the errors.

1.3    Assumptions

This document assumes a working knowledge of LSMW, and at least some basic understanding of the structure of fixed assets data in SAP.

2      The Basics

2.1    Data structures

There are two data structures in the asset load program RAALTD01: BALTD and BALTB.

2.1.1    BALTD

This structure is mandatory and contains all the basic fixed asset master data.

2.1.2    BALTB

This structure is for what are known as asset transactions. The two common scenarios in which this structure needs to be populated are:

a) An asset was capitalised after the start of the current financial year (the current financial year being the year in which you are going to migrate these assets into SAP), or

b) An asset was disposed of in the current financial year

2.2    Important fields

Key Fields

BALTD-ANLN1             Asset number (normally not used – internal numbering)

BALTD-ANLN2             Asset subnumber (normally not used – internal numbering)

BALTD-BUKRS             Company code

BALTD-ANLKL             Asset class

BALTD-OLDN1             Legacy asset main number

BALTD-OLDN2             Legacy asset sub number

BALTD-TCODE             SAP transaction

BALTD-RCTYP             Record type

Master Data

BALTD-AKTIV             Capitalisation date

BALTD-TXT50             Description

BALTD-TXA50             Additional description

BALTD-STORT             Location

BALTD-WERKS             Plant

BALTD-KOSTL             Cost centre

BALTD-LIFNR             Vendor

BALTD-INVNR             Inventory number

BALTD-LIEFE             Vendor name

BALTD-AIBN1             Original asset number

Depreciation Data (multiple records per asset)

BALTD-AFABEnn           Depreciation area

BALTD-NDJARnn           Planned useful life (years)

BALTD-NDPERnn           Planned useful life (months)

BALTD-AFASLnn           Depreciation key

BALTD-AFABGnn           Depreciation start date

BALTD-KANSWnn           Gross book value

BALTD-KNAFAnn           Accumulated depreciation

BALTD-NAFAGnn           Ordinary depreciation posted

Transaction Data (BALTB)

BALTD-BWCNT             Number of transactions

BALTB-BUKRS             Company code

BALTB-ANLKL             Asset class

BALTB-OLDN1             Legacy asset main number

BALTB-OLDN2             Legacy asset sub number

BALTB-TCODE             SAP transaction

BALTB-BZDAT             Transaction date

BALTB-RCTYP             Record type

BALTB-BWASL             Transaction type

BALTB-ANBTRnn           Amount

2.3    Modifying the standard asset structure

It is possible (and sanctioned by SAP) to modify SAP structures BALTD and BALTB. You only need to modify BALTB if you have added extra depreciation areas to BALTD.

You will of course need an object key to do so, but as long as the fields you are adding are active in transactions AS91, AS92, etc, then this is the only change you will need to make. RAALTD01 will do the rest.

Common reasons for modifying BALTD might be to increase the number of investment keys (default setting is 2) or the number of depreciation areas (default setting is 8). See more on the depreciation areas below.

There is more information in SAP OSS note 23716.

3      Common Problems

3.1    Alpha conversion

As RAALTD01 is very closely linked with the direct upload program RAALTD11, it retains some of the features of a direct upload program. One of these is that it checks during upload whether key data referenced in the asset exists in SAP. It does this without making any alpha conversion.

So, if you are creating an asset with asset class (field ANLKL) ‘100’, you must specify this in the LSMW mapping in its internal SAP format, i.e. ‘00000100’. The same goes for cost centres, vendors, etc.
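The padding rule itself is simple and can be sketched outside SAP as follows. This is only an illustration of the internal-format conversion described above, not SAP code; the field lengths used (8 characters for the asset class, 10 for cost centres and vendors) match the standard data dictionary, but verify them in your system:

```python
# Sketch of SAP's ALPHA-style conversion: purely numeric values are
# right-aligned and zero-padded to the field length; values containing
# non-digits are left unchanged.
def alpha_input(value: str, length: int) -> str:
    value = value.strip()
    if value.isdigit():
        return value.rjust(length, "0")
    return value

# Asset class '100' must be supplied as '00000100' (ANLKL is 8 chars).
print(alpha_input("100", 8))       # → 00000100
# Cost centres and vendors (10 chars) follow the same rule.
print(alpha_input("4711", 10))     # → 0000004711
# Alphanumeric keys are passed through untouched.
print(alpha_input("SALES01", 10))  # → SALES01
```

Applying this consistently in your LSMW field mappings avoids the "does not exist" errors the load program raises when given external-format keys.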

3.2    Legacy asset number

It is crucial when migrating data into SAP to store a reference to the legacy data key.

It is important for business reasons – so that a user can easily see the link between their old data and the new – but it is also important for technical reasons. This is obvious for certain objects like vendors and customers where you need to store the link in order to be able to migrate follow-on transactional data like AR and AP. But it is also useful for assets.

It is very helpful in the test stages of a data migration to be able to run and rerun your load program without fear of loading duplicate information. By storing the legacy asset number somewhere in the asset master, you can easily check with some ABAP code in your LSMW whether this legacy asset has already been created in table ANLA.
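The duplicate check is logically just a key lookup. In a real LSMW it would be an ABAP SELECT on table ANLA against whichever field holds your legacy key; the Python sketch below only illustrates the filtering logic, and all record and field names are made up for the example:

```python
# Hypothetical sketch of the rerun-safety check: skip any input record
# whose legacy asset number has already been loaded into SAP. In LSMW
# this would be an ABAP SELECT on table ANLA against the field where
# you store the legacy key (e.g. AIBN1).
existing_legacy_numbers = {"LEG-0001", "LEG-0002"}  # keys already in ANLA

def should_load(record: dict) -> bool:
    return record["legacy_number"] not in existing_legacy_numbers

records = [
    {"legacy_number": "LEG-0002", "description": "Forklift"},
    {"legacy_number": "LEG-0099", "description": "Server rack"},
]
to_load = [r for r in records if should_load(r)]
print([r["legacy_number"] for r in to_load])  # only LEG-0099 is loaded
```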

The most common field for storing the legacy number is AIBN1 (Original Asset Number), but do not put your legacy number here without checking. This field was intended to hold the original asset number in SAP after an asset has been transferred to a new number.

If using AIBN1 is going to be a problem, another option is ANLH-ANLHTXT. This is a text field which is often unused.

One red herring in the load program is OLDN1 (Old asset number). This field exists in the load structures but not in the database tables. It is only used in the processing of the load program. See further information on this field below in ‘Other quirks’.

If you find that AIBN1 and ANLHTXT are not appearing in the AS91 screens, you can change the screen layouts in customising.

Financial Accounting > Asset Accounting > Master Data > Screen Layout > Define Screen Layout for Asset Master Data

3.3    Missing customising

Before running your test and live asset migrations, please check that the following customising is in place. For some reason, these steps are often overlooked by the FI-CO functional consultants:

Transfer date – this should be set close to the date you migrate the assets. As AS91 is specifically for data migration, it expects the capitalisation date of every asset you migrate to be before the transfer date. Check this in table T093C.

Current fiscal year – also in T093C, check that this has been correctly set.

Number ranges – transaction SNUM.

3.4    NBV – Net book value

It is not possible to directly migrate the net book value of an asset. You must migrate the gross book value (acquisition cost) and the accumulated depreciation. SAP will then calculate the NBV.
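In other words, SAP derives the NBV from the two amounts you do migrate. The figures below are purely illustrative:

```python
# SAP derives net book value; you supply gross value and accumulated
# depreciation (BALTD-KANSWnn and BALTD-KNAFAnn respectively).
gross_book_value = 10_000.00   # acquisition cost      -> KANSWnn
accumulated_depr = 4_000.00    # depreciation to date  -> KNAFAnn

net_book_value = gross_book_value - accumulated_depr
print(net_book_value)  # 6000.0 – calculated by SAP, never migrated directly
```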

3.5    Time-dependent data

Some of the asset data is time-dependent, i.e. you can see the history of these fields. Example fields are cost centre, plant, internal order, location and business area. They are all stored in table ANLZ. The standard SAP load programs can only handle the current values of these fields. You cannot migrate multiple ANLZ records per asset.

If you need to do this, you should first create your assets using RAALTD01 and load the initial values of these fields. Then create another LSMW program using an AS92 recording to upload any changes.

3.6    Mid-year asset migration

Due to how the fixed asset depreciation links in with the General Ledger, if you are migrating assets midway through a fiscal year you have to split the depreciation to date into two amounts: depreciation up to the end of the previous fiscal year, and depreciation during the current fiscal year.

One way of doing this is in your LSMW code: in the Takeover Values screen, enter the depreciation amount up to the end of the previous year into the field Accumulated Depreciation (BALTD-KNAFAnn) and the amount of depreciation in the current year into the field Ordinary Depreciation Posted (BALTD-NAFAGnn). This is generally the most popular method.
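The split itself is just a partition of total depreciation to date at the previous fiscal year-end. The figures below are illustrative only:

```python
# Illustrative split of depreciation to date for a mid-year migration.
# KNAFAnn takes the portion up to the end of the previous fiscal year;
# NAFAGnn takes the portion posted in the current fiscal year.
total_depreciation_to_date = 5_500.00
depr_up_to_prior_year_end = 4_800.00   # -> BALTD-KNAFAnn

depr_current_year = total_depreciation_to_date - depr_up_to_prior_year_end
print(depr_current_year)  # -> BALTD-NAFAGnn, here 700.0
```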

The second way is to transfer the assets as at the end of the previous fiscal year and then run depreciation in SAP for all months in the current year. This involves more work for the functional consultants.

3.7    Assets created in this year

For the same reasons as described in ‘Mid-year asset migration’ above, you have to distinguish in SAP between assets created in previous years and assets created in the current year. A capitalisation in the current year needs to be identified as a transaction.

In this situation you will need to post both a BALTD record and a BALTB record.

Your posting will differ from the standard asset creation in the following ways:

·         BALTD-BWCNT needs to be populated with ‘0001’ (assuming you are only posting one transaction)

·         You need to map the acquisition value into BALTB-ANBTRnn with transaction type ‘100’ instead of mapping it as normal to BALTD-KANSWnn

·         BALTD-KNAFAnn does not need to be populated

·         BALTD-NAFAGnn should be populated with the depreciation amount

·         BALTB-BZDAT should be populated with the capitalisation date
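The bullet points above can be sketched as a record pair. The field names come from the structures in section 2.2, with the depreciation-area suffix ‘nn’ shown as ‘01’; all values are illustrative, not prescriptive:

```python
# Hypothetical record pair for an asset capitalised in the current
# fiscal year: the acquisition value moves from BALTD-KANSWnn into a
# BALTB transaction record with transaction type '100'.
baltd = {
    "RCTYP": "A",            # master record
    "BWCNT": "0001",         # one accompanying transaction
    "NAFAG01": 700.00,       # depreciation posted in the current year
    # note: KANSW01 and KNAFA01 are NOT populated in this scenario
}
baltb = {
    "RCTYP": "B",            # transaction record
    "BWASL": "100",          # transaction type: acquisition
    "BZDAT": "20240315",     # capitalisation date (illustrative)
    "ANBTR01": 12_000.00,    # acquisition value
}
print("BWCNT:", baltd["BWCNT"], "/ BWASL:", baltb["BWASL"])
```

A disposal in the current year follows the same pattern, with the disposal value in ANBTRnn, transaction type ‘200’ and the disposal date in BZDAT, as section 3.8 describes.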

3.8    Asset disposals

Disposals in the current fiscal year must also be identified as transactions.

In this situation you will need to post both a BALTD record and a BALTB record.

Your posting will differ from the standard asset creation in the following ways:

·         BALTD-BWCNT needs to be populated with ‘0001’ (assuming you are only posting one transaction)

·         You need to map the disposal value into BALTB-ANBTRnn with transaction type ‘200’

·         BALTB-BZDAT should be populated with the date of disposal

3.9    Depreciation areas

When creating fixed assets in SAP you will often populate multiple depreciation areas. Common depreciation areas set up in SAP might be Local Depreciation, Group Depreciation (if your company is international) and Tax. The depreciation rules for each of these might be slightly different, therefore you will create one depreciation area for each.

Rarely, you might need more than the 8 depreciation areas provided as a default by SAP in structure BALTD and BALTB. I have only ever experienced this once while loading fixed assets in Italy. It was necessary due to the multiple currency devaluations that the Italian lira had undergone over the past thirty years or so.

This can be handled by modifying the standard structures BALTD and BALTB. See information on this above in the section ‘Modifying the standard asset structure’.

3.10    Other quirks

3.10.1    Invalid characters

You may not put the hash character # in any of the BALTB or BALTD fields.

3.10.2    Long texts

The standard upload program does not handle long texts. These would need to be loaded in a separate program. But this situation occurs rarely.

3.10.3    ‘Unexpected record type found’

This error occurs if you enter the wrong record type in the two RCTYP fields. It should be ‘A’ in BALTD and ‘B’ in BALTB.

However, if you are sure that you do not have this problem and you are still getting the error, check whether all of the master data you have referenced exists. For example, are all your cost centre numbers valid? I have had this error in the past when migrating assets with transactions. SAP does not report the ‘real’ error of invalid data in your BALTD record; it gives this error instead.

The moral of this story: always validate your master data fields in LSMW when loading fixed assets. A set of user routines such as CHECK_KOSTL, CHECK_WERKS, CHECK_LIFNR, etc, can save you a lot of time.
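In LSMW those routines would be ABAP SELECTs against the check tables (CSKS for cost centres, T001W for plants, LFA1 for vendors). The sketch below shows the same validation pattern in Python, with the valid keys preloaded into sets; all data values are illustrative:

```python
# Sketch of master-data validation before the asset load. In LSMW the
# equivalent user routines (CHECK_KOSTL, CHECK_WERKS, ...) would read
# the SAP check tables; here the valid keys are simply preloaded sets.
valid_cost_centres = {"0000001000", "0000002000"}  # from CSKS
valid_plants = {"1000"}                            # from T001W

def validate(record: dict) -> list[str]:
    """Return a list of error messages for one input record."""
    errors = []
    if record["KOSTL"] not in valid_cost_centres:
        errors.append(f"invalid cost centre {record['KOSTL']}")
    if record["WERKS"] not in valid_plants:
        errors.append(f"invalid plant {record['WERKS']}")
    return errors

bad = {"KOSTL": "0000009999", "WERKS": "1000"}
print(validate(bad))  # one error: the cost centre does not exist
```

Rejecting such records before the load, with a clear message per field, is far quicker than deciphering a misleading ‘Unexpected record type found’ afterwards.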

3.10.4    Audit and error report

The error reporting in RAALTD01 is dreadful if you do not populate the field BALTD-OLDN1: the program will then list any errors against the asset description only, which is not particularly helpful. So, always populate OLDN1 even if you are not migrating transactions.

4      Footnote

This guide should be viewed as a starting point for discussions and is not intended as an exhaustive examination of the various methods available. There will inevitably be circumstances specific to individual situations that cannot be covered here.

You can find more Harlex Guides to SAP data migration here:

For further information on the migration of fixed assets into SAP or indeed on any data conversion topic, please contact Harlex at:

Source by Ben McGrail


PiLog’s Material Master Taxonomy 1.0 For SAP Master Data Governance

PiLog’s Master Data Management (MDM) strategy offers end-to-end consulting, implementation services and data quality management. MDM is all about solving business problems and improving data trustworthiness through the effective and seamless integration of information with business processes.
PiLog’s worldwide leadership in Master Data Governance (MDG), and in particular in the sphere of the material master, has once again been proven. Dr Salomon de Jager, PiLog Group CEO, said that PiLog’s 21 years of experience in MDG across many global companies and various industries have resulted in best-practice methodologies and software implementations being built into SAP technology. This enables SAP MDG users to effectively govern material master data in accordance with ISO 8000 and ISO 22745.

PiLog’s industry-proven taxonomy, processes and methodologies are combined as an integrated add-on to SAP MDG-M in order to add immense value to supply-chain management processes. This drastically reduces implementation time and costs. This PiLog solution increases the efficiency of material MDG within an enterprise. Quality material masters lead to quality data, and effective procurement processes have been proved to save companies millions of dollars in various areas of the business.

For more information contact PiLog’s local representative,
Click here to view PiLog’s SAP Certified Solutions Directory

About PiLog Group:
Established in 1996, PiLog Group is a global group of independent companies, specializing in Quality Data and MDG solutions. Today PiLog is a leading provider of Master Data Quality Solutions, supporting multiple master data domains in a variety of industries all over the globe. The PiLog solutions are state of the art, focused on creating a common business language and managing the rules for the creation of high quality, multilingual descriptions for our clients. PiLog provides exclusive technical dictionary content that is the culmination of research, development, and execution over the past twenty years.

PiLog also provides Master Data Management Services, including data cleansing and classification services governed and maintained through our software applications. Our processes and systems for Master Data Governance (MDG) are designed for compliance with ISO 8000, the international standard for quality master data.

The PiLog multilingual master data enrichment labs have extensive capabilities with built-in interactive quality control giving the client direct online access to project information, real-time monitoring, and acceptance testing. The enrichment labs are strategically located and managed to deliver high quality multilingual localization on time and on budget.

SAP, SAP MDG-M and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE (or an SAP affiliate company) in Germany and other countries. See for additional trademark information and notices. All other product and service names mentioned are the trademarks of their respective companies.

For more information, contact: Imad Syed, CIO PiLog Group, e-mail:, phone: +91 6301760928 and +91 6302994681

Source by Shoukat Ali