
Solution Manager Update - One month to go!


See below for the July and August Expert Guided Implementation (EGI) schedules – these are free with Enterprise Support.


Solution Manager 7.2 – One month to General Availability

Having covered some key features of 7.2 in a previous blog, I am now getting excited about what is going to improve my life as a user of Solman ITSM. Below are the five key enhancements I am looking forward to getting my hands on after our upgrade (time to start planning!).


New Fiori Launchpad. This will simplify access to my normal daily processing and bring back the count of new incidents in my queue.

Solman_Fiori.png

New responsive “my incident” Fiori app for mobile processing. This is a biggie as I am on the road a lot and struggle to process incidents in a timely manner.

Solman_myincident.png

Embedded fuzzy search across all objects. This does require TREX if you are not moving to the HANA database.

Solman_search.png


Auto refresh of widgets/searches. This will provide push notifications so we don’t have to run reports to check for new incidents etc.

Solman_Auto.png

New Analytics. Configurable KPI content with new SAPUI5 Analytics Launchpad.

Solman_Analytics.png

E-mail Management Enhancements. We can finally send HTML mail with a URL link to the incident. This will also enable us to remove our customization for creating incidents from inbound e-mail.


All in all, a decent set of improvements to look forward to. Also, remember to register for SAP TechEd 2016 - Las Vegas, September 19–23, 2016 - save $100 through August 26. GROM will have a booth and will be discussing our experiences with Solution Manager 7.2 in great detail.


EGIs – North America Schedule for July 2016:

To view the online schedule for Expert Guided Implementation sessions, select all delivery methods and use the "Region" search filter to refine the search results:

  Meet the Expert (1-hour presentation – replays available):

  North America Schedule for August 2016: 

  Meet the Expert (1-hour presentation – replays available):


SAP HANA - What's there for Customers


What are the benefits of SAP HANA for customers?

 

SAP HANA is a breakthrough in-memory database solution in the industry. SAP claims that it accelerates analytics and applications on a single in-memory platform, combining database, data processing, and application platform capabilities.

SAP HANA is a next-generation business platform which brings together

  • Business transactions
  • Advanced analytics
  • Social media
  • Mobile experience
  • Collaborative business
  • Design connection


image1.JPG

 

So now the question arises: “How does this help my business?” and “How can I get more ROI?”

 

 

 

 

 

Here are the top reasons that describe the benefits of choosing SAP HANA over other RDBMSs.


Note: HANA is actually a data platform, not just a database.

 


                                                                                SPEED


SAP HANA provides a foundation on which to build a new generation of applications, enabling customers to analyze large quantities of data from virtually any source, in real time.

 

Image2.JPG

 

Experience the real-time Enterprise in action


A live analysis by a consumer products company reveals how SAP HANA analyzes current point-of-sale data in real time— empowering this organization to review segmentation, merchandising, inventory management, and forecasting information at the speed of thought.

 

Image3.JPG

 

                                     

                                        REAL TIME


The real benefit of SAP HANA is its capability to process large amounts of data in “real” real time, enabling the real-time enterprise through advanced in-memory technology.

 

The in-memory approach enables much faster data processing and should allow companies to run far more sophisticated data analytics applications compared to conventional relational databases.

HANA's ability to let large data sets be manipulated in memory enables enterprises to run more ad hoc queries against business data and reduces the need for pre-defined cubes and queries.

 

                                                                ANY DATA

Pull up-to-the-minute data from multiple sources. Evaluate options to balance financial, operational, and strategic goals based on today’s business.

 

Such capabilities are becoming increasingly useful, especially in industries such as retail, health care, scientific analysis and financial services. There has been an explosion in the amount of structured and unstructured data that organizations are able to collect and need to process quickly.

 

image4.JPG

 

 

                                                          ANY SOURCE

 

SAP HANA can be integrated into a wide range of enterprise environments, allowing it to handle data from Oracle databases, Microsoft SQL Server, and IBM DB2.

 

Provides multiple ways to load customer’s data from existing data sources into SAP HANA.

 

                                                                CHANGE


SAP HANA has changed the meaning of analytics. Before this innovation, analytics was considered quite a challenge in itself and used to be defined as follows:

  • Preconfigured dashboards based on fixed business requirements.
  • Long wait times to produce custom reports.
  • Reactive views and an inability to define future expectations.

 

However, this has changed drastically with SAP HANA, now we can ASK QUESTIONS


image5.JPG
Quickly and easily create ad-hoc views without needing to know the data or query type, allowing you to formulate actions based on deep insights.

image6.JPG
Receive quick reactions to newly articulated queries so you can innovate new processes and business models to outpace the competition.

 

INNOVATION & COST


SAP HANA's configuration, easy integration, and revolutionary capabilities make it flexible enough for virtually anything a customer's business requires.

                         Energy Management

Utility companies use SAP HANA to process and analyze vast amounts of data generated by smart meter technology, improving customers’ energy efficiency, and driving sustainability initiatives.


Real-time Transit Routing

SAP HANA is helping research firms calculate optimal driving routes using real-time GPS data transmitted from thousands of taxis.


Software Piracy Detection and Prevention

Tech companies use SAP HANA to analyze large volumes of complex data to gain business insights into software piracy, develop preventive strategies, and recover revenue.

 

SAP HANA reduces customer’s total IT cost so customers can increase spending on innovation.

                                      CLOUD


SAP HANA Cloud Platform is an open platform-as-a-service (PaaS) that provides unique in-memory database and application services.


SAP HANA Cloud Platform includes the following services:

 

SAP HANA AppServices: builds on the capabilities of SAP HANA DB Services and is ideal for the creation of innovative, consumer-grade applications and for the extension of cloud and on-premise applications.

 

It is packed with features that enable the real-time, secure applications required to succeed in today’s always-on, mobile, social and data driven world.

 

SAP HANA DB Services: an easy, low-cost way to be up and running with a fully supported SAP HANA system, with monthly license and infrastructure subscriptions.

 

With monthly subscriptions in configurations from 128GB to 1TB, SAP HANA DB Services delivers fast provisioning of SAP HANA and hardware, and includes a cloud management console for easy configuration and administration.

 

Allows customers to build real time analytic applications using the development capabilities of SAP HANA.

 

SAP HANA Infrastructure Services: enables customers to quickly deploy and manage their pre-licensed SAP HANA instances without hardware investments and setup time, by purchasing infrastructure subscriptions.

 

SAP’s infrastructure subscription is a scalable, affordable way to deploy SAP HANA licenses in the cloud. Also included is the SAP Data Services component of SAP HANA Cloud Integration, providing seamless integration with SAP back-ends and heterogeneous sources.

 

SAP HANA cloud services are available on AWS and now on Azure too.


OPTIONS


SAP HANA provides choice at every layer so customers can work with their preferred partners.

 

  • Run on the hardware of customer’s choice.
  • Work with the software customer prefers

 

Collaboration with a number of partners means that SAP can complete the software stacks of our diverse customer base in configurations that make sense for their business. Plus, a variety of different options means that customers won’t be locked in by a single provider.

image7.JPG

 

                                

 

 

 

 

 

START SUM TOOL ON LINUX


STARTING SUM TOOL ON LINUX SYSTEM

 

The procedure for starting the SUM tool differs between Windows and UNIX/Linux.

 

On Windows you can start the tool directly by running the batch file (SUMSTART.BAT).

 

On Linux, however, you cannot start the SUM tool directly.

 

Software Required:

 

SAPHOSTAGENT.SAR

 

SUM TOOL

 

Extract both files using SAPCAR:

 

./SAPCAR -xvf <path to SAR file> -R <path to extract to>
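For illustration only, extracting both archives into hypothetical target directories could look like this (the archive names and paths below are placeholders, not actual file names):

./SAPCAR -xvf /tmp/downloads/SAPHOSTAGENT.SAR -R /tmp/hostagent    # example download path and target
./SAPCAR -xvf /tmp/downloads/SUM.SAR -R /usr/sap                   # a SUM folder is created under the target path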

 

INSTALLATION OF SAP HOSTAGENT

 

Go to the path where SAP HOSTAGENT is extracted.

 

Run the following command

 

./saphostexec -install

 

The host agent will be installed.
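To double-check that the host agent is running, you can query its status. The path below is the default host agent installation location and is given here as an assumption; adjust it if your installation differs:

/usr/sap/hostctrl/exe/saphostexec -status    # default install path assumed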

 

EXTRACTING AND GIVING PERMISSION TO SUM

 

After extracting the SUM tool SAR file, grant the required ownership and permissions to the SUM folder.

 

We have to change the ownership of SUM Folder from root to <SID>adm


Use following Command:


chown -R <SID>adm:sapsys <SUM>


Changing the Permission of SUM Folder


Use following Command:


chmod -R 755 <SUM>
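As a concrete illustration, assuming a hypothetical SID of ABC and the SUM folder extracted to /usr/sap/SUM, the two commands would be:

chown -R abcadm:sapsys /usr/sap/SUM    # "abcadm" and the path are example values only
chmod -R 755 /usr/sap/SUM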


STARTING SUM TOOL


We cannot start the SUM GUI directly on Linux; it is accessed through a browser via the SAP Host Agent.


From ROOT USER


Go to the SUM Folder


Run the following Command:


./SUMSTART confighostagent <SID>


It will start services.

 

Now change from ROOT user to <SID>ADM User.

 

su <SID>adm

 

Run the following command:

 

./SUMSTART

 

Now open a browser on your Windows system.


Enter the following URL


http://<hostname>:1128/lmsl/sumabap/<SID>/doc/sluigui


hostname : hostname of the linux server where sum tool is running.


It prompts to enter the Username and password

USER : <SID>ADM

PWD:******

 

The SUM tool is now started.

When to use MRP Live (MD01N) as compared to Classic MRP (MD01)


When to use MRP Live as compared to conventional MRP:

 

From the outside, material requirements planning (MRP) looks like a simple process: the system picks up receipts (supply) from elements such as vendor receipts, stock transfers and in-house production, nets them against issues to meet customer demand, and calculates the requirements for future needs. In practice, however, there are many issues that impact the business today. Some of the challenges MRP systems face are: systems are not able to predict the right quantities to meet future needs, they are unable to capture the latest changes in supply and demand, and MRP processing time increases as the volume of master data and transactional data grows. In some SAP projects it has been observed that, due to program performance and database issues, MRP runs are executed as batch jobs at certain intervals rather than at the regular interval the business actually needs.


SAP has come up with a new MRP program powered by HANA that addresses the issues mentioned in the section above. This document provides some insights into when to use MRP Live, the features of MRP Live, and the steps to be followed while setting it up.


MRP can be executed in two ways: either through the classic MRP transaction (MD01) or with MRP Live (MD01N) where the SAP system is powered by SAP HANA. This means MRP Live can be used with Business Suite on HANA or from S/4HANA 1511 onwards. Instead of using multiple transactions such as MD01, MD02 and MD03, one can use the single transaction MD01N and run it either in the foreground or in the background with multiple options. Here is the selection screen of MD01N, which can be executed with multiple combinations.

 

pic1.png

 

Another feature of MRP Live is that, by selecting the “Stock transfer materials” checkbox, it is possible to plan a material in the supplying plant if there are stock transfer requirements associated with it. Since performance is not a constraint with HANA, SAP has removed “Net change planning in the planning horizon” from the selection screen.


Also, if for any reason the BOM components have issues in MRP Live, the system will plan them using the classic MRP program (MD01).


The outcome of MRP Live run is as follows.


pic1.png


MRP Live can also be executed using the MRP Fiori apps discussed in my other blog. Refer to that blog for more details on the list of apps available for planning purposes. The MRP Fiori apps can only be used if the business function LOG_PPH_MDPSX_READ is active.


http://scn.sap.com/community/erp/manufacturing-pp/blog/2016/06/25/analysis-of-fiori-applications--production-planning-lob

 

To further improve the performance of MRP Live on HANA (MD01N), one can exclude certain materials and plan them with the classic MRP run (MD01) instead. If the volume of data is high, it is advisable to classify materials based on consumption pattern, business priority, demand fluctuations, etc. For materials that have regular consumption and whose demand keeps changing continuously, it is better to use MRP Live. The transaction MD_MRP_FORCE_CLASSIC is helpful for excluding materials from MRP Live on HANA so that they run with classic MRP.


Selection screen:

 

pic1.png

 

Setting up classic MRP run for a set of materials:

 

pic1.png

 

 

Also, this report is helpful for restricting the display of materials in the MRP Fiori apps. For example, if the business would like to exclude “C” class items from the Fiori apps, you can filter them out using this program.


pic1.png


Here are some of the benefits of  MRP Live

 

pic1.png


The functions that are not supported by MRP Live as of the current version are summarized below; SAP is planning to add some of these functionalities in upcoming versions.


  • BAdIs no longer supported. MRP Live (transaction MD01N) does not process BAdIs for materials that are completely planned in SAP HANA.
    How to upgrade: For such materials, you have to force the MRP Live run to call classic MRP by setting the Plan in Classic MRP indicator in transaction MD_MRP_FORCE_CLASSIC.

  • Total requirements not supported. Total customer and total dependent requirements are not supported.
    Reason: Total requirements were used for performance reasons, which is now no longer necessary.
    How to upgrade: Before activating the business function, make sure you have no total requirements in your system.

  • Planning horizon. MRP Live does not support net change planning in the planning horizon; the planning horizon is ignored.
    Reason: This planning run type was a performance measure only, which is now no longer necessary.
    How to upgrade: After the upgrade, a new report is available to carry out MRP Live. The new report does not offer this planning run type.

  • Planning file entries in table MDVM. Only planning file entries in table DBVM are supported; planning file entries in table MDVM are no longer supported.
    Reason: Performance of reading DBVM is much better because it has a proper key.
    How to upgrade: Run report RMDBVM00 if it has not already been run before.

  • Planning sequence of plants. Table T439C has been decommissioned.
    Reason: Table T439C only allows you to define the planning sequence of locations, not location materials. With the sequence defined in table T439C you cannot support the stock transfer of different materials in different directions between the same two locations.
    How to upgrade: The MRP logic determines the planning sequence of plants automatically. There is no need to define a planning sequence in a table; no special activities are necessary.

  • Creation indicator for purchase requisitions. The creation indicator for purchase requisitions is not available in MRP Live. MRP Live always creates purchase requisitions for external procurement. It does not support the process where planned orders are created first and have to be converted into purchase requisitions later.
    Reason: Simplification of the procurement process. The reason for using planned orders in the external procurement process was to separate responsibilities: the purchaser was responsible for purchase requisitions and the production planner for planned orders, and the conversion of planned orders into purchase requisitions transferred responsibility from the production planner to the purchaser. This process is no longer required.
    How to upgrade: After the upgrade, a new report is available to carry out MRP Live. The new report does not support the creation of planned orders for external procurement. If you require planned orders for external procurement, you have to use classic MRP.

  • Creation indicator for delivery schedule lines. The creation indicator for delivery schedule lines is not available in MRP Live. MRP Live always creates delivery schedule lines if the material’s source list entry tells MRP to do so.
    Reason: There is no conversion of purchase requisitions into delivery schedule lines. Therefore an MRP run that does not create delivery schedule lines renders delivery schedules useless.
    How to upgrade: No upgrade activity necessary.

  • Destinations for parallel processing. For MRP Live, destinations for parallel processing no longer have to be defined in table T462A. The Customizing activity Define Parallel Processing in MRP is no longer necessary.
    Reason: Destinations can be determined automatically using server groups. There is no need to define them separately.
    How to upgrade: Define server groups.

  • Subcontractor planning segments. Subcontracting is only supported with explicit MRP areas (BERTY 03). Subcontractor planning segments (PLAAB 26) are not supported.
    Reason: Subcontractor MRP areas support all features of subcontractor planning segments. This is an attempt to reduce redundant features.
    How to upgrade: You have to activate subcontractor MRP areas by creating an MRP area of the type subcontractor for every subcontractor. Then you have to assign every subcontracting part and every vendor that requires the part to the MRP area.

  • Multi-level, make-to-order planning (transaction MD50) is not optimized for MRP Live.
    Reason: Multi-level, make-to-order planning was a performance measure only, which is no longer necessary.
    How to upgrade: Use MRP Live for the top-level material and include the BOM components in planning.

  • Individual project planning (transaction MD51) is not optimized for MRP Live.
    Reason: Individual project planning was a performance measure only, which is no longer necessary.
    How to upgrade: Use MRP Live for the top-level material and include the BOM components in planning.

Weak reference in ABAP and Java



Recently I had a recruitment interview on the topic of weak references, and I thought it necessary to refresh my knowledge on this topic. When I wrote a small program to verify my assumption, I ran into some trouble, and great thanks to Horst Keller, who helped me out.


According to the ABAP help, we can wrap an object reference in a so-called weak reference; for example, see the following code:


lo_person = NEW lcl_person( 'Jerry' ).
lo_weak = NEW cl_abap_weak_reference( lo_person ).

clipboard1.png

And later we can get the reference to Jerry back via get method provided by weak reference.

lo_person = CAST lcl_person( lo_weak->get( ) ).

 

lo_person will become initial only on the condition that there is no other reference pointing to the object when the garbage collector runs. To verify this I wrote the following small program:

 

REPORT ztest.

PARAMETERS: clear TYPE char1 AS CHECKBOX DEFAULT abap_true,
            gc    TYPE char1 AS CHECKBOX DEFAULT abap_true.

CLASS lcl_person DEFINITION.
  PUBLIC SECTION.
    DATA: mv_name TYPE string.
    METHODS: constructor IMPORTING !iv_name TYPE string.
ENDCLASS.

CLASS lcl_person IMPLEMENTATION.
  METHOD constructor.
    me->mv_name = iv_name.
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  DATA: lo_person TYPE REF TO lcl_person,
        lo_weak   TYPE REF TO cl_abap_weak_reference.

  lo_person = NEW lcl_person( 'Jerry' ).
  lo_weak = NEW cl_abap_weak_reference( lo_person ).

  IF clear = abap_true.
    CLEAR: lo_person.
  ENDIF.

  IF gc = abap_true.
    cl_abap_memory_utilities=>do_garbage_collection( ).
  ENDIF.

  lo_person = CAST lcl_person( lo_weak->get( ) ).

  IF lo_person IS INITIAL.
    WRITE: / 'reference not available'.
  ELSE.
    WRITE: / 'reference still available'.
  ENDIF.

There are two switches for this report. The first switch controls whether the reference pointing to Jerry will be cleared or not; the second controls whether garbage collection should be called explicitly.

clipboard1.png

There are four possibilities of combination, and the corresponding result are listed below:

 

Clear reference to Jerry?   Call garbage collection?   Object pointed to by the weak reference cleared by the garbage collector?
Yes                         Yes                        Yes
Yes                         No                         No
No                          Yes                        No
No                          No                         No

In first scenario, in memory snapshot we can clearly see that the object pointed to by lo_person is deleted.

clipboard2.png

For the remaining three scenarios, the lcl_person instance is not deleted by the garbage collector:

clipboard3.png

The weak reference in Java behaves the same as ABAP. You can use the following Java code to test it and get the same result:

 

import java.lang.ref.WeakReference;

class Person {
    private String mName;

    public Person(String name) {
        this.mName = name;
    }

    public String getName() {
        return this.mName;
    }
}

public class WeakReferenceTest {
    public static void check(Person person) {
        if (person == null) {
            System.out.println("Reference invalid");
        } else {
            System.out.println("Reference still available");
        }
    }

    public static void main(String[] args) {
        Person jerry = null;
        WeakReference<Person> person = new WeakReference<Person>(new Person("Jerry"));
        jerry = new Person("Ben"); // if you comment out this line, Reference will be available
        System.gc();
        Person restore = person.get();
        check(restore);
    }
}

Soft reference

 

In my system ( Netweaver 750 SP4 ), the help says soft reference is not implemented.

clipboard4.png

Since it is not implemented in ABAP, we can test in Java instead.

 

Use the following Java code to test:

package reference;

import java.lang.ref.SoftReference;
import java.util.ArrayList;

class Person2 {
    private String mName;

    public Person2(String name) {
        this.mName = name;
    }

    public String getName() {
        return this.mName;
    }

    public void finalize() {
        System.out.println("finalize called: " + this.mName);
    }

    public String toString() {
        return "Hello, I am " + this.mName;
    }
}

public class SoftReferenceTest {
    public static void main(String[] args) {
        SoftReference<Person2> person = new SoftReference<Person2>(new Person2("Jerry"));
        System.out.println(person.get());
        ArrayList<Person2> big = new ArrayList<Person2>();
        for (int i = 0; i < 10000; i++) {
            big.add(new Person2(String.valueOf(i)));
        }
        System.gc();
        System.out.println("End: " + person.get());
    }
}

This simple program will generate the following output in console:

Hello, I am Jerry
End: Hello, I am Jerry

 

The reason is that, although I have created 10,000 Person2 instances to consume some memory, the memory pressure is still not high enough to make the JVM delete the Jerry instance wrapped in the soft reference. As a result, after System.gc() is called, the reference is still available.

 

In the real Java world, soft references are usually used to implement cache mechanisms where limited memory is available, for example in Android application development.

 

PhantomReference

 

There is another kind of reference in Java: PhantomReference. You can use the following code to test:

 

package reference;

import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomReferenceTest {
    public static void main(String[] args) {
        Object phantomObj;
        PhantomReference phantomRef, phantomRef2;
        ReferenceQueue phantomQueue;

        phantomObj = new String("Phantom Reference");
        phantomQueue = new ReferenceQueue();
        phantomRef = new PhantomReference(phantomObj, phantomQueue);

        System.out.println("1. Phantom Reference: " + phantomRef.get());
        System.out.println("2. Phantom Queued: " + phantomRef.isEnqueued());

        phantomObj = null;
        System.gc();

        System.out.println("3. Anything in Queue? : " + phantomQueue.poll());
        if (!phantomRef.isEnqueued()) {
            System.out.println("4. Requestion finalization.");
            System.runFinalization();
        }
        System.out.println("5. Anything in Queue?: " + phantomRef.isEnqueued());

        phantomRef2 = (PhantomReference) phantomQueue.poll();
        System.out.println("6. Original   PhantomReference: " + phantomRef);
        System.out.println("7. PhantomReference from Queue: " + phantomRef2);
    }
}

This program will generate the following output:

 

1. Phantom Reference: null
2. Phantom Queued: false
3. Anything in Queue? : null
5. Anything in Queue?: true
6. Original   PhantomReference: java.lang.ref.PhantomReference@2a139a55
7. PhantomReference from Queue: java.lang.ref.PhantomReference@2a139a55

Unlike WeakReference or SoftReference, the object wrapped in a PhantomReference is never accessible via get(); this is the reason why the first line of output is "1. Phantom Reference: null".


In the constructor of PhantomReference, I passed a queue as an argument. When the object referenced by the PhantomReference is removed by the garbage collector, the PhantomReference itself (not the deleted object reference) is inserted into that queue by the JVM. Before System.runFinalization() is called, phantomRef.isEnqueued() returns false and phantomQueue.poll() returns nothing. After phantomObj is deleted by the JVM, the PhantomReference is put into the queue and can be retrieved via phantomQueue.poll(). By comparing the object references in the line 6 and line 7 output, we can see that these two PhantomReference instances are exactly the same.

Configure Native Spark Modeling in SAP BusinessObjects Predictive Analytics 3.0


The Native Spark Modeling feature has been available since SAP BusinessObjects Predictive Analytics version 2.5, which supported Native Spark Modeling for classification scenarios. The latest release of SAP BusinessObjects Predictive Analytics (version 3.0) now supports regression scenarios as well. The business benefit of Native Spark Modeling is primarily the ability to train more models in a shorter period of time and hence obtain better insight into business challenges by learning from the predictive models and targeting the right customers very quickly.

 

Native Spark Modeling is also known as IDBM (in-database modeling). With this feature of SAP BusinessObjects Predictive Analytics, model training and scoring can be pushed down to the Hadoop level through the Spark layer. The Native Spark Modeling capability is delivered in Hadoop through a Scala program running in the Spark engine.

 

fig1.png

In this blog you will get familiar with the end-to-end configuration of Native Spark Modeling on Hadoop using SAP BusinessObjects Predictive Analytics.

 

Steps to set up Native Spark modeling:

Let’s review the configuration steps in detail below:

 

1. Install SAP BusinessObjects Predictive Analytics


According to your deployment choice, install either the desktop or the client/server mode. Refer to the steps mentioned in the installation overview link or installation guides Install PAsection installation.

During installation, all the required configuration files and the pre-delivered packages for Native Spark Modeling will be installed in the local desktop or server location.


2. Check SAP BusinessObjects Predictive Analytics installation


In this scenario, SAP BusinessObjects Predictive Analytics server has been chosen as the deployment option and is installed on a Windows server. After the successful installation of the SAP BusinessObjects Predictive Analytics server, you will see the folder structure below in the local directory on the Windows server.

 

.    fig2.png

As the SAP BusinessObjects Predictive Analytics 3.0 server is installed, navigate on the Windows server into the SAP Predictive Analytics\Server 3.0\ folder. You will see the folder SparkConnector, which contains all the required configuration files and the Native Spark Modeling functionality in the form of 'jar' files.

fig3.png

Click on the SparkConnector folder to check the following directory structure. The below folder structure will show up.

fig4.png

 

3. Check whether the winutils.exe file exists in “bin” folder for windows installation


Apache Spark requires the executable file winutils.exe to function correctly on the Windows Operating System when running against a non-Windows cluster.


fig5.png


4. Check the required client configuration xml files in “hadoopConfig” folder


Create a sub folder for each Hive ODBC DSN. For example, in this scenario the sub folder is named “IDBM_HIVE_DUB_CLOUDERA”.(Note: This is not a fixed name, you can name it according to your preference).


Each sub folder should contain the 3 Hadoop client XML configuration files for the cluster (core-site.xml, hive-site.xml, yarn-site.xml). Download client configuration xml files. You can use admin tools such as Hortonworks Ambari or  Cloudera Manager to download these files.

fig6.jpg

Note: This sub folder is linked to the Hive ODBC DSN by the SparkConnections.ini file property "HadoopConfigDir", not by the subfolder name.


5. Download required Spark version jar in the folder “Jars”


Download the additional assembly jar files from the link below and copy them into the SparkConnector/Jars folder.

fig7.png

 

 

 

6. Configure Spark.cfg (for client/server mode) or KJWizardJNI.ini (for desktop mode) to set the right Spark version and path


As the SAP BusinessObjects Predictive Analytics server is installed here, open the Spark.cfg file within the Server 3.0 folder in Notepad or any other text editor. Native Spark Modeling supports the Spark versions offered by the two major Hadoop enterprise vendors at present (Cloudera and Hortonworks).

As the Cloudera Hadoop server is being used in this scenario, you should keep the configuration path of Spark version 1.5.0 of Cloudera server active in the Spark.cfg configuration file and comment out the Hortonworks server’s Spark version. Also path to connection folders and some tuning options can be set here.


Navigate to folder location: C:\Program Files\SAP Predictive Analytics\Server 3.0\SparkConnector\ and edit Spark.cfg file.

fig9.png

For Desktop the file location: Navigate to the folder location C:\Program Files\SAP Predictive Analytics\Server 3.0\EXE\Clients\KJWizardJNI and edit KJWizardJNI.ini file.


fig11.png

7. Set up Model Training Delegation for Native Spark Modeling-

 

In Automated Analytics Menu, navigate to the following path. File -> Preferences -> Model Training Delegation.

By default the “Native Spark Modeling when possible” flag should be switched on; if it is not, please switch it on. Then press the OK button.

fig12.png

8. Create an ODBC connection to Hive Server as a data source for Native Spark Modeling


This connection will be later used in Automated Analytics to select Analytic Data Source (ADS) or Hive tables as input data source for the Native Spark modeling.

  • Open the Windows ODBC Data Source Administrator
  • In the User DSN tab press Add
  • Select the: 'DataDirect 7.1 SP5 Apache Hive Wire Protocol' from the driver list and press Finish

fig13.png

  • In the General tab enter:
  • Data Source Name: IDBM_HIVE_DUB_CLOUDERA (This is just an example – no fixed name is compulsory for this to work)
  • Host Name: xxxxx.xxx.corp
  • PortNumber: 10000
  • Database Name: default

fig14.png

  • In the Security tab set Authentication Method to: '0 - User ID/Password' and set the User Name and password.
  • SWITCH ON the flag “Use Native Catalog Functions”. Select Use Native Catalog Functions to disable the SQL Connector feature and allow the driver to execute HiveQL directly.

fig15.png

  • Press the "Test Connect" button.
  • If the connection is successful, press APPLY and then OK. If the connection test fails even when the connection information is correct, please make sure that the Hive Thrift server is running.

 

9. Set up the SparkConnection.ini file for your individual ODBC DSN

 

This file contains Spark connection entries specific to each Hive data source name (DSN). For example, if there are three Hive ODBC DSNs, the user has the flexibility to decide that two of them should run on IDBM and not the last one; i.e., any DSN not present in the SparkConnections.ini file will fall back to the normal modeling process using the Automated Analytics engine. To set the required configuration parameters for Native Spark Modeling, navigate to the SAP BusinessObjects Predictive Analytics 3.0 Desktop/Server installation folder (for a server installation, go to C:\Program Files\SAP Predictive Analytics\Server 3.0\SparkConnector\; for a Desktop installation, go to C:\Program Files\SAP Predictive Analytics\Desktop 3.0\Automated\SparkConnector), then edit the SparkConnections.ini file and save it.

 

fig16.png

As in this scenario a Cloudera Hadoop box is being used you need to set the parameters in the file as per the configuration requirement of Cloudera clusters.


For Cloudera Clusters:


  • To enable Native Spark Modeling against a Hive data source, you need to define at least the below minimum properties.

Each entry after "SparkConnection" needs to match exactly the Hive ODBC DSN (Data Source Name).


  • Upload the spark 1.5.0 assembly jar to HDFS and reference the HDFS location.

            SparkConnection.IDBM_HIVE_DUB_CLOUDERA.native."spark.yarn.jar"="hdfs://hostname:8020/jars/spark-assembly-1.5.0-hadoop2.6.0.jar"
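As an illustration, the upload itself can be done with the standard HDFS client from a machine that has access to the cluster; the /jars target directory matches the HDFS path referenced above, and the jar is assumed to have been downloaded locally already:

hdfs dfs -mkdir -p /jars                                       # create the target directory referenced above
hdfs dfs -put spark-assembly-1.5.0-hadoop2.6.0.jar /jars/      # upload the assembly jar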


  • Set hadoopConfigDir and hadoopUserName, as they are mandatory.


There are two mandatory parameters that have to be set for each DSN

    • hadoopConfigDir (The directory of the core-site.xml, yarn-site.xml and hive-site.xml files for this DSN)

               Use relative paths to the Hadoop client XML config files (yarn-site.xml, hive-site.xml, core-site.xml)

              

               For example:

SparkConnection.IDBM_HIVE_DUB_CLOUDERA.hadoopConfigDir="../../../SparkConnector/hadoopConfig/IDBM_HIVE_DUB_CLOUDERA"


    • hadoopUserName (a user name with privileges to run Spark on YARN)

               For example: SparkConnection.IDBM_HIVE_DUB_CLOUDERA.hadoopUserName="hive"


  • It is possible to pass in native Spark parameters to set Spark properties using "native" in the property.

              For e.g. SparkConnection.MY_HDP_HIVE_DSN.native."spark.executor.instances"="4"

 

(This parameter sets the number of executors. Note that this property is incompatible with spark.dynamicAllocation.enabled. If both spark.dynamicAllocation.enabled and spark.executor.instances are specified, dynamic allocation is turned off and the specified number of spark.executor.instances is used.)

 

For Hortonworks Clusters:


        Apart from the other configuration stated above, the following two properties are also mandatory and need to match the HDP version exactly:

 

    • SparkConnection.IDBM_HIVE_HDW.native."spark.yarn.am.extraJavaOptions"="-Dhdp.version=2.3.2.0-2950"

    • SparkConnection.IDBM_HIVE_HDW.native."spark.driver.extraJavaOptions"="-Dhdp.version=2.3.2.0-2950"

 

               (Where IDBM_HIVE_HDW is the ODBC connection to the Hortonworks Hadoop system.)

 

(Note:This is one time configuration activity. You can get help from your IT administrator to set up SparkConnection.ini file.)

 

The tuning options for Spark can also be set here. This is useful when the dataset is larger than what the available memory and the default Spark settings on the cluster can handle.

 

# (Optional) performance tuning parameters


#SparkConnection.IDBM_HIVE_DUB_CLOUDERA.native."spark.executor.instances"="4"

#SparkConnection.IDBM_HIVE_DUB_CLOUDERA.native."spark.executor.cores"="2"

#SparkConnection.IDBM_HIVE_DUB_CLOUDERA.native."spark.executor.memory"="4g"

#SparkConnection.IDBM_HIVE_DUB_CLOUDERA.native."spark.driver.maxResultSize"="4g"                    


10. Run the Native spark modeling using Automated Analytics Modeler

 

Open Automated Analytics and select classification or regression data mining method.

Select ODBC connection to the Hadoop server (e.g. IDBM_HIVE_DUB_CLOUDERA) and select the hive table as data source.

 

fig16.png

Choose an existing Hive Table using 'Use Database Table' option or an Analytical Dataset which is based on Hive tables using the 'Use Data Manager' option.

 

Click NEXT button and load the description of the dataset from a local file or click analyze to pull the metadata of the Hive table in Automated Analytics.

fig1.png

In the next screen, select the input and target variables. (For a classification scenario, an example of a target variable could be one that indicates whether a customer of a bank has a credit card or not, e.g. Credit_card_Exist (=Yes/No).)

 

Then navigate to next screen and click on generate model training.

 

Now the process will be delegated to the Spark layer. In the progress bar you will notice several processing steps that will take place in sequence.

 

fig1.png

You can observe Spark jobs in details through the application Web UI which can be started by typing http://localhost:4040 in the browser.

 

fig1.png

Configuring Native Spark Modeling doesn’t require any coding at all. In a very short amount of time you can set up Native Spark Modeling within SAP BusinessObjects Predictive Analytics and work with a Hadoop data source as you would work with any other database. From the user’s point of view, you won’t experience any difference.

 

Call to action


For more information on how Native Spark Modeling works please refer to the blog: http://scn.sap.com/community/predictive-analytics/blog/2016/03/18/big-data-native-spark-modeling-in-sap-predictive-analytics-25.

 

Please let us know your experience when using Native Spark Modeling. If you encounter any problems, feel free to ask questions on SCN.

How about "Big Sister" instead of "Big Brother" for better Digital Government


When most people think of a big sister I hope they think of someone loving, nurturing and caring.  I grew up with 2 brothers so I did not personally experience this.  Yet I am sure that a good big sister grows into great mother of which I know I had the world’s best.    

I hear you ask, so what does this have to do with improving digital government?

There is a pervasive attitude that Governments cannot be trusted when it comes to its citizen’s personal data.  Many people believe there is a lack of openness and transparency from government.  This lack of transparency typically leads to mistrust. 

My first introduction to the concept of trust in government and spying on citizens came in "high school" while reading 1984 by George Orwell.  (A top 100 all-time book according to most critics)

As far as I know George Orwell introduced the world to "Big Brother"

For those who have not read 1984 by George Orwell, let me introduce "Big Brother" as defined by Wikipedia:

Big Brother is a fictional character and symbol in George Orwell's novel Nineteen Eighty-Four. He is ostensibly the leader (either the actual enigmatic dictator or perhaps a symbolic figurehead) of Oceania, a totalitarian state wherein the ruling Party wields total power "for its own sake" over the inhabitants.

In the society that Orwell describes, every citizen is under constant surveillance by the authorities, mainly by telescreens (with the exception of the Proles). The people are constantly reminded of this by the slogan "Big Brother is watching you": a maxim which is ubiquitously on display. In modern culture the term "Big Brother" has entered the lexicon as a synonym for abuse of government power, particularly in respect to civil liberties, often specifically related to mass surveillance.

https://en.wikipedia.org/wiki/Big_Brother_(Nineteen_Eighty-Four)

In the movie the "telescreens" had an image of Big Brother.  Quite creepy!

User_big_brother_1984.gif

For this blog I want to ignore the intelligence community.  I want to talk about regular government and how they can collect and use citizen data for public good.   What I would like to hear about are more government big data stories regarding improving citizen experience and outcomes and not just tackling fraud, waste and abuse.

I am sure it is fair to say that “Government” knows a lot about its citizens. (Slightly tongue in cheek: perhaps not as much as Google, Facebook, or Amazon.) However, what government knows about its citizens is often of little use to those citizens. There is not really a single entity “Government”. Often governments do not have a single view of their citizen. Some of the citizen data is still in paper files. Some data is owned and stored by a social agency, other data by a tax agency or a health agency. Some is at the federal level, while other data is at the state level and more at the city/county level. So, all told, “Government” knows everything about us, but not in a single, usable place. I would think this is a common government big data issue. Can this be solved using a government cloud?

Think about a citizen that is facing hardship. Perhaps they were recently made redundant / unemployed. Perhaps someone became disabled due to an accident. They are probably already stressed. It must be very frustrating for this citizen to have to tell the government something that it already knows. In an ideal world the government should know from a prior event or data point and proactively reach out to the citizen. The citizen does not always know or care that the data is owned or stored by another department or agency. They want government to have a single view of them. They often see "Government" as a single entity. They expect the same world-class "experience" that they get from their favorite retailer. When citizens read about smart cities, this experience does not make them think they live in a smart city.

So why does the retailer or e-commerce company deliver a great customer experience?  More often than not they have a great customer profile.  They know you, they know their customers.  The same is true with a great family doctor or your favorite bar / restaurant.  They know you, what you like, what you need etc...

Can the Government build this same great citizen profile?  And do so in an open and transparent way?  Can they give their citizens access to their data?  Let the citizen verify the data the government has on them.  Enable the citizen to authorize how and what the government can use this data for.  Allow the citizens to opt in and opt out of all or certain uses of their data or even certain subsets of their data.

The benefits to the citizens who opt in would be great.  Citizens could get a world class experience when dealing with governments.  The entire experience could be personalized for them.  It could make them aware of benefits they are eligible for. It could dramatically simplify the application and filing process for many benefits and processes. It could automate or eliminate many processes.

I think we should call this “Big Sister” instead of Big Brother.  Let's separate the 2 use cases.  We should welcome a "Big Sister" approach to improve Citizen Experience for those who opt in. 

What do you think of such an approach?  Would you be okay with government using this data for your benefit? 

How to set filters for archived BP/customer during data replication


During data replication between ECC and CRM, archived BPs or customers are not automatically filtered out, so you have to explicitly set a filter to achieve this.

 

1. If you want to set filters for object BUPA_MAIN in R3AC1, you could perform the following steps:

 

     1> Run T-code SM30, enter table name SMOFFILFLD.

     2> Create an entry as follows. This is needed because table SMOFFILFLD is the controlling table for allowed filter fields. In the standard, only the mentioned table/field names are provided, but you are able to add more as you see fit.


screenshot1.png

     3> Run T-code R3AC1, click on tab "Filter settings", add filter for object BUPA_MAIN:

 

screenshot2.png

This part has been described in my KBA 2163871 - How to set up filters for archived BP for object BUPA_MAIN

 

 

2. If you want to set filters for object CUSTOMER_MAIN in R3AC1, you could perform the following steps:


     1> Go to Tcode R3AC1 of CRM system.

     2> Choose object CUSTOMER_MAIN and click on the "Filter settings" tab.

     3> Add the filter as KNA1-LOEVM unequal to "X".


QQ截图20160628200631.png

This part has been described in my KBA 2336339 - How to set up filters for archived Customer for object CUSTOMER_MAIN

 

 

3. If you want to set filters in SMOEAC for archived BP, you could add the filters as below:

 

QQ截图20160703153922.png

Regarding how to generate filters in SMOEAC, you could refer to the KBA 1834681.

 

 

Here I also want to point out the difference between the filters in R3AC1 and SMOEAC:

 

- Filters maintained in R3AC1 work for the following scenarios:

>> Initial load of objects, both in the direction of ECC to CRM and from CRM to ECC.

>> Delta load from ECC to CRM.

 

- Filters maintained in SMOEAC work for the following scenario:

>> Upload from CRM to ECC.




I hope this blog helps you.


XS Advanced features: Using Synonyms; Using non-HDI container schema objects in HDI container.


This blog will give you information on how to use objects from a non-HDI container or stand-alone schema in your container.


A word about HDI Containers


As we enter the world of XS Advanced, we come across many new terms and one of them is "HDI container".

You can think of it as a database schema. It abstracts the actual physical schema and provides schema-less development. All the objects you create will sit in a container. You can read more about them in the blog written by Thomas Jung. Please visit http://scn.sap.com/community/developer-center/hana/blog/2015/12/08/sap-hana-sps-11-new-developer-features-hdi

 

The key points that we need to emphasize while working with the HDI containers are:

  • A database schema and a technical user also get created in the HANA database for every container. All the runtime objects from the container, like tables, views, procedures, etc., sit in this schema and not in the schema bound to your database user.
  • All the database object definitions and access logic have to be written in a schema-free way.
  • Only local object access is allowed. This means that you can only access the objects local to your container. You can also access the objects of other containers and non-HDI container schemas (foreign schemas), but only via synonyms, and only as long as the technical user of the HDI schema has been granted access to the foreign schema.

 

Creating Synonyms

 

Now you will be looking at an example of creating a synonym for the objects of a non-HDI container schema (foreign schema) in your container.

This example is based on SPS 12 and uses both XS command line tool and SAP Web IDE for SAP HANA (XS Advanced) tool.

 

Prerequisites:

  • You should have a database user who is able to access the XSA Web IDE tool.
  • Your database user should have the authorization (WITH GRANT OPTION) on the foreign schema.
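If that authorization is missing, a database administrator could grant it (for example, SELECT access), e.g. via the hdbsql command-line client; the host, port, user names and schema below are placeholders only:

hdbsql -n <hana-host>:30015 -u SYSTEM -p <password> 'GRANT SELECT ON SCHEMA "MY_FOREIGN_SCHEMA" TO MY_DEV_USER WITH GRANT OPTION'   # all names are examples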

 

Let's start with the example step by step.

 

Create a user provided service.

You have to create a user-provided service for your foreign schema. Open the XSA client tools and log in with your user by issuing the 'xs login' command.

Now create user service by issuing 'xs create-user-provided-service' or 'xs cups' command.

You can use the following syntax:

xs cups <service-name> -p "{\"host\":\"<host-name>\",\"port\":\"<port-number>\",\"user\":\"<username>\",\"password\":\"<password>\",\"driver\":\"com.sap.db.jdbc.Driver\",\"tags\":[\"hana\"], \"schema\":\"<foreign schema name>\"}"
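For illustration, a call with made-up values might look like the following (the host, port, credentials, service name and schema are placeholders only); 'xs services' then lists the services so you can confirm it was created:

xs cups CROSS_SCHEMA_SRV -p "{\"host\":\"hanahost\",\"port\":\"30015\",\"user\":\"TECH_USER\",\"password\":\"SecretPwd1\",\"driver\":\"com.sap.db.jdbc.Driver\",\"tags\":[\"hana\"],\"schema\":\"EXTERNAL_DATA\"}"
xs services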

 

 

Modifying mta.yaml file.

You have to correctly configure all services including the user provided service in the mta.yaml file. This allows using the user provided service within the project.

 

Add an entry of the user provided service you created in 'resources' section of mta.yaml file. Use the below sample code as a reference.

mta1.JPG

Figure 1: Entry of user provided service in mta.yaml file example

 

Also, add a dependency of this service in HDB module of your project. Use the below sample code as a reference.

mta2.JPG

Figure 2: Service dependency in HDB module (mta.yaml file example)



Creating .hdbsynonymgrantor file.

This file specifies the necessary privileges to access external tables. Open the XSA Web IDE and, under the HDB module of your project, create a new folder named 'cfg'; just like the 'src' folder, its name is special. It tells the HDI deployer that this folder contains configuration files to be treated appropriately.

Create your .hdbsynonymgrantor file under this folder. Sample content of this file might be:

grantor_file.JPG

Figure 3: .hdbsynonymgrantor file example



Creating synonym for external object

Create a .hdbsynonym file in 'src' folder of your HDB module. In one .hdbsynonym file you can define multiple synonyms to be used in your project.

Please use the below code sample as your reference for creating synonyms.

synonym.JPG

Figure 4: .hdbsynonym file example


Now, you should be able to use those external tables in your container using these synonyms.

Fiori Analytics on HANA Cloud Platform using Smart Business Service - I


I have been using the HANA Cloud Platform Fiori Launchpad for a while and have always wanted to see KPI tiles on my Fiori Launchpad. The SAP Smart Business (SBS) framework has been around for a while, and customers have been able to visualize their KPIs/OPIs without writing a line of code in their on-premise solutions.

 

                   

 

I have been exploring this option for a while and I am excited to share the news that using SMART business services in HCP, it is now possible to create KPI Tiles on your HCP Fiori Launchpad. What is even more interesting is that you can also create rich analytical applications (out-of-the-box) which help you give more information around your KPIs.

 

I had earlier posted on fundamentals of KPI Modeler and how to use the generic KPI drill-down application which ships along with Smart Business Framework. It's good to see the same on-premise functionality being ported to the cloud.

 

In this series of blog posts, my colleague Nash Gajic and I are going to show you how easy it is to enable Smart Business services on HCP and bring up KPI tiles on the Fiori Launchpad.

 

The SBS framework on HCP relies on OData services when it comes to consuming business information. As of today, it supports OData services based on HANA XS, CDS or even Apache Olingo. It is important to note that these OData services need to be annotated following the OData4SAP standard in order to identify the dimensions and measures of each entity.

 

       

 

In this blog, we are going to take a scenario where a customer who has Business Suite on HANA system is looking to create Analytical/Transactional apps on HCP. One of the ways to expose data from a HANA system is via XS OData services.  We are going to show how to expose XS OData services from a HANA system and consume them in Smart Business Services of HCP to create KPI Tiles.

 

    

 

Prerequisites

 

  • To access the trial landscape, you need to have a developer account first. If you still need to create a developer account, you can start from here: Signing Up for a Developer Account.

 

  • For demonstration purposes, we are going to use the HANA MDC database which is available for free on the HCP trial account rather than an on-premise HANA system.

 

Enable Portal Service

 

In your HCP trial account, locate your Portal service and enable it. Launch the service to create a fresh Portal.

 

              

 

From the site directory, add a new Portal site of Fiori Launchpad type.

 

              

 

Once the site is created, navigate to the Site settings menu and publish the empty site.

 

              

 

After publishing your site, from the site directory ensure that the new site is made your default site

 

              

 

We have now created an empty Launchpad site. In the next step, we will leverage the Smart Business service to publish a few apps to this Launchpad site.

 

Enable Smart Business Services

 

In your HCP trial account, locate the Smart Business service and enable it.

 

              

 

Once the service is enabled, you will find many destinations created automatically in your HCP cockpit. Navigate to the destination menu of your HCP cockpit and locate the one named “flpuis”. Change the authentication type from “Basic Authentication” to “AppToAppSSO”. Currently this is a bug and will be addressed soon.

 

              

 

Navigate back to the Smart Business service and click on “Configure Smart Business Apps”.

 

              

 

In the Smart Business Configuration App, select the Portal site which was created earlier and click on “Import Apps”.

 

              

 

You should get a success message stating that the smart business apps are imported to your site.

 

Navigate back to the Launchpad site which you created earlier and launch the site. You should be able to see a number of Smart Business apps available in your Launchpad site. These apps will allow you, as a content administrator, to build KPI tiles and make them available to end users.

 

 

You will notice there is a Portal role now available directly related to the corresponding Smart Business service groups shown above. As an administrator, you can assign these Smart Business service apps to your portal content developers accordingly.

 

 

In the next part of this blog, my colleague Nash Gajic will show you how to setup the data source (HANA Database) on HCP and use the Smart Business services to consume XS OData services in order to create KPI tiles and link them to generic drill down applications.

Fiori Analytics on HANA Cloud Platform using Smart Business Service 2


In this post you will learn how to use SAP Smart Business service on HANA Cloud Platform which will help you to visualise content in the form of analytical Fiori tiles and interactive dashboards. SAP Smart Business Service allows you to define, manage, and leverage consistent key performance indicators (KPI) and operational performance indicators (OPI) across all your business applications. You will learn how to model KPIs against analytic content exposed as OData services from HANA on HCP.


In order to create tiles that display key information from the underlying application data services, we will be using SAP HANA Interactive Education (SHINE) content deployed to our HANA Cloud Platform MDC instance. SHINE comes with pre-built data models, tables, views, dashboards, etc. For more details on SHINE, please visit the following link: https://github.com/SAP/hana-shine

 

1.png

 

Prerequisites


  • To access the trial landscape, you need to have a developer account first. If you still need to create a developer account, you can start from here: Signing Up for a Developer Account.

 

 

Instructions

To create and configure a trial tenant database, follow these steps:

 

  1. Log on to the SAP HANA Cloud Platform cockpit and select an account.
  2. Choose Databases & Schemas from the menu on the left. Choose New.
  3. In the Database ID enter a unique ID that contains lower-case letters and numbers.
  4. In the Database System field select the HANA MDC (<trial>) option. Fill in the SYSTEM User Password field. Choose Save.
    The Events page is displayed. It shows the progress of the database creation.

 

NOTE: If there are no free resources available you will not be able to create a tenant database; you should try again later. The free trial also offers a maximum of three Databases & Schemas which you can assign to your account.

2.png


5. Wait until the tenant database is in Started state. This process might take a while.


3.png

 

Now you have your tenant database created and configured. Click on Administration Tools: SAP HANA Cockpit.

 

6. This opens the logon screen of the administrative console. Log on with the user SYSTEM and the password you saved for your database/schema.

 

7. New roles are assigned automatically after you confirm.

4.png

5.png

 

8. This leads to the SAP HANA Database Administration screen. Click on the Manage Roles and Users tile to proceed.

6.png

9. This opens the SAP HANA Web-based Development Workbench: Security screen. Double-click the SYSTEM user and add roles by clicking the green + sign:

7.png

10. Under “Type name to find a role:”, select the following roles:

 

sap.hana.ide.roles::CatalogDeveloper

sap.hana.ide.roles::Developer

sap.hana.ide.roles::EditorDeveloper

sap.hana.xs.lm.roles::Administrator

sap.bc.ina.service.v2.userRole::INA_USER


8.png

11. Click on the Save icon:

9.png

12. Navigate to the SHINE content available in the GitHub repository https://github.com/SAP/hana-shine

Click on the file HCODEMOCONTENT_11.1.tgz and then select “View Raw” to download the delivery unit to your machine.

10.png


13. Upload the delivery unit into your newly created HANA DB. Navigate back to your HANA Cockpit and click on the tile “HANA Application Lifecycle Management”.

11.png

14. Click on the Products tile and navigate to the tab “Delivery Units”

12.png

Click the “Import” button, point to the downloaded .tgz file, and click “Import”.


13.png

14.png

Confirm the import by verifying the contents and click on “Import”. The objects will be activated within a minute.

15.png

15. Navigate back to the Database Overview screen and launch the Web Development Workbench.

16.png

16. Click on “Editor” to launch the development editor

17.png

17. You should find a package called “democontent”, which contains a number of objects. Select the index.html file and click the run icon as shown below.

18.png

       

18. You will get a “403 Forbidden” error message, because we have not yet assigned the roles required to access this demo content.

19.png

19. Navigate back to the HANA Cockpit and open the tile “Manage Roles and users”. Add the below role to the SYSTEM user and Save your changes.

20.png

20. We should now be able to run the index.html file available under democontent. You will initially be prompted to check the prerequisites.

21.png

21. Click on “Generate Time Data” and “Create Synonyms” and close the popup.

21-1.png

This now provides access to pre-made applications for you to test some of the features available in HANA. Spend some time exploring a few of the applications.

23.png

22. Go to your Web-based Development Workbench screen and open the services package, which contains the services that expose your data (the following image depicts some of the key design-time objects).

24.png

 

23. Go to services > salesByRegion.xsodata and modify code by adding:

 

annotations {

enable OData4SAP;

}


Click Save and execute to test XSOdata

25.png

24. Go to the HCP cockpit Destinations and clone the BusinessSystem00 destination*, which was created when you initially imported your site. *Refer to my colleague's earlier post if you are missing your BusinessSystems destination.

27.png

Import a new destination based on the same naming convention, BusinessSystems (this points the HCP Smart Business service to the OData services currently exposed from your SHINE content). The URL should reflect your existing HCP services catalog, together with your database login credentials.

28.png

25. Go back to your Cloud Portal and open up your previously created Launchpad.

29.png

For the next steps go to part 3 of this blog: Fiori Analytics on HANA Cloud Platform using Smart Business Service 3.

 


Fiori Analytics on HANA Cloud Platform using Smart Business Service 3


Welcome to part 3 of this blog, with step-by-step instructions on how to set up Fiori Analytics using the SAP Smart Business service on HANA Cloud Platform. Before starting here, make sure you have completed all the steps in part 1 and part 2.


In order to create tiles that display key information from the underlying application data services, we need to configure the business systems and set up our data sources correctly.


1. Click on Configure Business Systems

30.png

2. Enter your Data Source and Click Save.

31.png

3. Click on Create KPI

32.png

4.Fill in all the mandatory fields

33.png

Point your KPI to the data source, in our case Business System01.

The OData service used is the one we previously annotated, /salesByRegion.xsodata. Don't forget to Save and Activate.

34.png

5. Create an evaluation for your KPI, which defines what information about the KPI or OPI is visible to the SAP Smart Business user at runtime.

35.png

6. Configure the input parameters and filters.

36.png

7. You can also set values for the dimensions to act as additional filters.

37.png

8. Configure the target, thresholds, and Activate and Configure Tile. Only active evaluations are available at runtime. To visualize an evaluation, you must create a tile using the Configure KPI Tiles app.

38.png

9. The Configure KPI Tiles app allows you to create tile visualisations for an active evaluation. Click on previously created KPI Evaluation and Add Tile.

39.png

10. In the Configured Tiles view, select your groups and catalogs and Save and Configure Drill-Down.

40.png

11. Click on Configure, which allows you to set up the generic drill-down application that you navigate to when you click the KPI or OPI tile in the runtime environment.

41.png

12. Select your dimension, in our case Region, and click OK.

42.png

13. This opens the Drill-Down Chart Configuration screen. Change your visualisation type and title, then save the view.

43.png

14. Click on Save Configuration

44.png

15. Refresh your Portal screen and click on the newly created KPI tile, SalesByRegion.

45.png

46.png

Congratulations, you have created your KPI using Smart Business Service on HCP!

 


An Open Source ABAP JSON Library - ZCL_MDP_JSON_*


Hi ABAP developers,

 

I would like to introduce a new open-source ABAP JSON library we have developed. Why does the world need a new JSON library? I will explain our rationale for developing this library along with its features. In the end, it is about having more choices and knowing the trade-offs. I would like to thank Medepia IT Consulting for releasing this work as open source under the MIT License.

 

Table of Contents:

  • Summary
  • Alternatives
  • Reasoning and features
  • Examples
  • Performance
  • Links
  • Warning
  • Conclusion

 

 

Summary

Unlike the alternatives, this library lets you generate any custom JSON, so you can easily achieve API compatibility with a JSON server written in another language. Besides providing a serializer and a deserializer, the library defines an intermediate JSON node class in ABAP. Further development may enable more JSON utilities based on this representation.

 

Alternatives

CL_TREX_JSON_*

 

Standard transformation using JSON-XML:  https://scn.sap.com/community/abap/blog/2013/01/07/abap-and-json

 

Manual string manipulation: While it provides flexibility, it is tedious and error prone work. Sometimes it is used together with CL_TREX_JSON_*

 

These libraries also seek automatic mapping:

https://github.com/se38/zJSON/wiki/Usage-zJSON

https://wiki.scn.sap.com/wiki/display/Snippets/One+more+ABAP+to+JSON+Serializer+and+Deserializer

 

 

Reasoning and features

It is intriguing to me that there was no JSON node representation in ABAP. Let me give examples from other languages:

 

Working with JSON in dynamic or loosely typed languages is easier, since easily modifiable representations for JSON objects and arrays already exist in the standard language (for example, dictionaries and lists in Python, or plain objects and arrays in JavaScript).

 

In strongly typed languages like ABAP, Java, and Go there are two approaches: automatic mapping between JSON and statically typed data structures, or a generic intermediary representation of the JSON document.

 

Our library has chosen the intermediary representation approach defining the class ZCL_MDP_JSON_NODE.

 

abap_json_node.jpg

 

Features:

  • It provides flexibility down to the JSON spec. This is important because you get the same flexibility as manual string manipulation, without the errors, so your ABAP service or client can be made compatible with any other JSON API without string handling.
  • You can deserialize any JSON string (see the round-trip sketch after this list).
  • You know exactly what the deserializer will produce when you see a JSON string.
  • You don't need to define intermediary data types just for JSON input/output.
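
As a quick illustration of the "deserialize any JSON string" point above, here is a minimal round-trip sketch. It uses only the calls shown in the examples below (deserialize and serialize); the variable names and the sample payload are purely illustrative.

DATA: lv_incoming TYPE string,
      lv_outgoing TYPE string,
      lo_node     TYPE REF TO zcl_mdp_json_node.

* Any JSON payload, e.g. received from an external API.
lv_incoming = '{ "status": "ok", "items": [ 1, 2, 3 ] }'.

* Parse it into the intermediary node representation ...
lo_node = zcl_mdp_json_node=>deserialize( json = lv_incoming ).

* ... and write it back out as JSON, without defining any ABAP types for the payload.
lv_outgoing = lo_node->serialize( ).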

 

 

 

Future ideas for development:

  • The intermediary ZCL_MDP_JSON_NODE class enables the development of methods such as a JSON equality checker, beautification of JSON output, and spec-validity checks for string and number values.

 

  • The library uses regexes for parsing. Most of the time regex can be a quick solution. However, I think finite-state machines are better suited for parsers in general.

 

  • We will work on this library based on our needs and your suggestions. For example, we can work towards 100% compliance with the JSON specification by running edge-case tests.

 

 

 

 

Examples

The examples here are in their shortest form, to show how easy JSON manipulation can become. There will be more examples in the project repo using other features of the class. The JSON node class is easy to understand once you have studied its attributes and methods.

 

Deserialization Example:

 

DATA: l_json_string TYPE string.

CONCATENATE

 

'{'

' "books": ['

' {'

' "title_original": "Kürk Mantolu Madonna",'

' "title_english": "Madonna in a Fur Coat",'

' "author": "Sabahattin Ali",'

' "quote_english": "It is, perhaps, easier to dismiss a man whose face gives no indication of an inner life. And what a pity that is: a dash of curiosity is all it takes to stumble upon treasures we never expected.",'

' "original_language": "tr"'

' },'

' {'

' "title_original": "Записки из подполья",'

' "title_english": "Notes from Underground",'

' "author": "Fyodor Dostoyevsky",'

' "quote_english": "I am alone, I thought, and they are everybody.",'

' "original_language": "ru"'

' },'

' {'

' "title_original": "Die Leiden des jungen Werthers",'

' "title_english": "The Sorrows of Young Werther",'

' "author": "Johann Wolfgang von Goethe",'

' "quote_english": "The human race is a monotonous affair. Most people spend the greatest part of their time working in order to live, and what little freedom remains so fills them with fear that they seek out any and every means to be rid of it.",'

' "original_language": "de"'

' },'

' {'

' "title_original": "The Call of the Wild",'

' "title_english": "The Call of the Wild",'

' "author": "Jack London",'

' "quote_english": "A man with a club is a law-maker, a man to be obeyed, but not necessarily conciliated.",'

' "original_language": "en"'

' }'

' ]'

'}'

 

INTO l_json_string

SEPARATED BY cl_abap_char_utilities=>cr_lf.

 

DATA: l_json_root_object TYPE REF TO zcl_mdp_json_node.

l_json_root_object = zcl_mdp_json_node=>deserialize( json = l_json_string ).

 

DATA: l_string TYPE string.

l_string = l_json_root_object->object_get_child_node( key = 'books'
  )->array_get_child_node( index = 1
  )->object_get_child_node( key = 'quote_english' )->value.

 

START-OF-SELECTION.

WRITE: 'Quote from the first book: ', l_string.

 

 

 

 

 

Serialization Example:

 

DATA: l_string_1 TYPE string.

 

DATA: l_root_object_node      TYPE REF TO zcl_mdp_json_node,
      l_books_array_node      TYPE REF TO zcl_mdp_json_node,
      l_book_object_node      TYPE REF TO zcl_mdp_json_node,
      l_book_attr_string_node TYPE REF TO zcl_mdp_json_node.

 

*Create root object

l_root_object_node = zcl_mdp_json_node=>create_object_node().

 

*Create books array

l_books_array_node =  zcl_mdp_json_node=>create_array_node().

*add books array to root object with key "books"

l_root_object_node->object_add_child_node( child_key = 'books' child_node = l_books_array_node ).

 

*You would probably want to do this in a loop.

*Create book object node

l_book_object_node = zcl_mdp_json_node=>create_object_node().

*Add book object to books array

l_books_array_node->array_add_child_node( l_book_object_node ).

 

l_book_attr_string_node = zcl_mdp_json_node=>create_string_node().

l_book_attr_string_node->value = 'Kürk Mantolu Madonna'.

*Add string to book object with key "title_original"
l_book_object_node->object_add_child_node( child_key = 'title_original' child_node = l_book_attr_string_node ).

 

l_string_1 = l_root_object_node->serialize().

 

*ALTERNATIVE:

DATA: l_string_2 TYPE string.

*DATA: l_root_object_node_2 type zcl_mdp_json_node.

 

*Create same JSON object with one dot(.) and without data definitions using chaining.

l_string_2 = zcl_mdp_json_node=>create_object_node(
  )->object_add_child_node( child_key = 'books' child_node = zcl_mdp_json_node=>create_array_node(
    )->array_add_child_node( child_node = zcl_mdp_json_node=>create_object_node(
      )->object_add_child_node( child_key = 'title_original' child_node = zcl_mdp_json_node=>create_string_node(
        )->string_set_value( value = 'Kürk Mantolu Madonna' )
      )
    )
  )->serialize( ).

 

START-OF-SELECTION.

 

WRITE:/'string 1: ', l_string_1.

WRITE:/'string 2: ', l_string_2.

 

 

 

Challenge: Try doing these examples with CL_TREX_JSON_*

 

 

For more examples please visit GitHub repo.

 

 

Performance

On a test machine, deserializing and then serializing the JSON string example above (l_json_string) 10,000 times takes 2.1 seconds on average, so the library shouldn't cause performance problems in general usage. The complete benchmark code will be in the project repo.

 

DO 10000 TIMES.

  zcl_mdp_json_deserializer=>deserialize(

   EXPORTING json = l_json_string

   IMPORTING node = l_jsonnode ).

 

  zcl_mdp_json_serializer=>serialize(

   EXPORTING node = l_jsonnode

   IMPORTING json = l_json_string ).

ENDDO.
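
If you want to reproduce such a measurement yourself, here is a minimal timing sketch, not the project's official benchmark. It assumes l_json_string is filled as in the deserialization example above and uses GET RUN TIME, which returns microseconds.

DATA: l_json_string TYPE string,    " filled as in the example above
      l_jsonnode    TYPE REF TO zcl_mdp_json_node,
      lv_start      TYPE i,
      lv_end        TYPE i,
      lv_seconds    TYPE p DECIMALS 2.

GET RUN TIME FIELD lv_start.

DO 10000 TIMES.
  zcl_mdp_json_deserializer=>deserialize(
    EXPORTING json = l_json_string
    IMPORTING node = l_jsonnode ).

  zcl_mdp_json_serializer=>serialize(
    EXPORTING node = l_jsonnode
    IMPORTING json = l_json_string ).
ENDDO.

GET RUN TIME FIELD lv_end.

* GET RUN TIME delivers microseconds, so convert to seconds.
lv_seconds = ( lv_end - lv_start ) / 1000000.
WRITE: / 'Runtime in seconds:', lv_seconds.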

 

 

 

Links

Here is a presentation about this JSON library:

Medepia ABAP JSON Library ZCL_MDP_JSON

 

Project code repository:

GitHub - fatihpense/zcl_mdp_json: Medepia ABAP JSON library that can generate and parse any JSON string.

 

Warning

The library isn't extensively battle tested as of now. Testing your use case before using it in production is strongly advised. Please report if you encounter any bugs.

 

 

Conclusion

If you are just exposing a table as JSON without much modification, it is easier and probably better to use CL_TREX_JSON_*

If you are developing an extensive application and if you want to design your API beautifully, this library is a pleasant option for you.

 

Thanks for reading.

 

Best wishes,

Fatih Pense

SAP S/4 HANA: Simplifications in Sales & Distribution Data Models


SAP S/4HANA is the new offering from SAP, built on the high-performance in-memory platform SAP HANA, with an enriched user experience through Fiori apps. This new system includes major changes and massive simplifications, ranging from changes in the underlying data models to a new user interface delivered through Fiori apps.

 

Objective:

 

The objective of this blog is to understand the data model simplifications in the SD area through a comparative study with a non-S/4 system.

 

Below are the major simplification points:

 

  • Status tables VBUK and VBUP have been eliminated and the new status fields have been added to:

 

    • VBAK and VBAP for sales order header and item
    • LIKP and LIPS for delivery document header and item
    • VBRK for billing document header

 

  • Simplification of document flow table VBFA

 

  • Document index tables such as VAKPA and VAPMA have been removed.

 

 

Comparison:

 

Let us understand the differences by looking at the table structures:

 

  • VBUK and VBUP are still present in S/4 HANA but they are not filled when an order is created.
  • The status fields have been added through append structure.

 

Table Name   Description                Append Name
VBAK         Sales Order Header         VBAK_STATUS
VBAP         Sales Order Item           VBAP_STATUS
LIKP         Delivery Header            LIKP_STATUS
LIPS         Delivery Item              LIPS_STATUS
VBRK         Billing Document Header    VBRK_STATUS

 

  • If any custom fields have been added to the VBUK or VBUP tables in the source system, they will have to be added to the respective document tables in the S/4HANA system.
  • An append field of the document header status table VBUK must be added to one or several of the document header tables VBAK, LIKP, or VBRK, depending on which document types the respective field is relevant for.
  • An append field of the document item status table VBUP must be added to one or more of the document item tables VBAP or LIPS.

 

 

VBAK:

 

1.png

 

VBAP:

 

2.png

 

When a sales order is created in a non-S/4HANA system:

 

Sales Order: 7137:

 

3.png

 

VBAK table holds the order header details

4.png

Order status information is present in VBUK table

5.png

 

Sales Order in S/4 HANA system:

 

9.png

 

VBAK table contains both the header details as well as the header status

6.png

 

Header status fields are now added to VBAK table

7.png

Table VBUK is present but is not filled.

8.png

 

 

Reason VBUK / VBUP are still present in S/4 HANA:

 

Since the status tables are no longer filled, one might ask why they are still present rather than removed from the system altogether. The reason is to enable a smooth transition for customers opting to migrate to an S/4HANA system.

 

New function modules have been written which read the document status fields from the VBAK, LIKP, and VBRK tables and return them in an output structure similar to that of VBUK.

 

For example, SD_VBUK_READ_FROM_DOC is a new FM that fills a VBUK structure for one SD document from the document header tables; the data is fetched depending on the document type. A hedged call sketch follows the notes below.

 

10.png

 

  • The output structure ES_VBUK still refers to VBUK table.
  • Any custom fields added to appends like VBAK_STATUS will be read by this FM.
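
To make this concrete, here is a minimal, hedged sketch of calling the function module from a report. Only the exporting parameter ES_VBUK is confirmed above; the importing parameter name IV_VBELN and the rest of the interface are assumptions, so check the function module in transaction SE37 before relying on this.

* Hypothetical sketch - only ES_VBUK is confirmed above; IV_VBELN is an assumed
* parameter name, so verify the interface of SD_VBUK_READ_FROM_DOC first.
DATA: ls_vbuk TYPE vbuk.

CALL FUNCTION 'SD_VBUK_READ_FROM_DOC'
  EXPORTING
    iv_vbeln = '0000007137'   " sales document number (assumed parameter name)
  IMPORTING
    es_vbuk  = ls_vbuk.       " VBUK-like structure filled from VBAK/LIKP/VBRK

WRITE: / 'Overall processing status:', ls_vbuk-gbstk.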

 

 

Advantages:

 

  • Reduced memory footprint: since fewer tables are involved, redundant data is reduced and the document flow is simplified.
  • If we need to query the order table on the basis of document status, we can now do so with a single query on VBAK instead of a join between VBAK and VBUK, as sketched below.
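
A minimal Open SQL sketch of that difference, assuming we filter on the overall processing status GBSTK (one of the VBUK status fields appended to VBAK in S/4HANA; check the simplification note for the exact field list):

* Before (classic ERP): the status lives in VBUK, so a join is needed.
SELECT vbak~vbeln
  FROM vbak
  INNER JOIN vbuk ON vbuk~vbeln = vbak~vbeln
  WHERE vbuk~gbstk = 'A'                " 'A' = not yet processed
  INTO TABLE @DATA(lt_open_orders_old).

* S/4HANA: the status field is part of VBAK itself (VBAK_STATUS append),
* so one single query on VBAK is enough.
SELECT vbeln
  FROM vbak
  WHERE gbstk = 'A'
  INTO TABLE @DATA(lt_open_orders_new).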

 

NOTES:

 

  • OSS Note 2198647 can be referred to for further information.

 

Acknowledgment:

 

  • Reference taken from Simplification List for S/4 HANA 1511 0302.

 

 

Suggestions and Comments Welcomed !!

 

 

~Tanmay

SAP Single Sign-On 3.0 Now Available


On July 4, 2016, SAP released the latest version of the SAP Single Sign-On product. Release 3.0 expands the existing coverage for mobile and cloud scenarios, modernizes the X.509 certificate-based scenario, simplifies implementation through close platform integration, and offers continuous improvement of security protocols based on market requirements, among other new features and enhancements.

 

SAP Single Sign-On 3.0 continues to offer the sophisticated security functionality customers are looking for while placing a strong emphasis on simplification and a sustainable return-on-investment. Now let’s take a closer look at the new capabilities with SAP Single Sign-On 3.0.

 

 

A Look at the New Features in SAP Single Sign-On 3.0

 

Enhanced Support for Existing PKI Implementations

 

With SAP Single Sign-On 3.0 the Secure Login Server can now act as Registration Authority (RA) while your existing enterprise PKI acts as Certificate Authority (CA), both for user and server certificates. This means that if you already have an enterprise PKI in place, you don’t have to establish a second one. Certificates can be signed based on your established PKI and security policy, and your storage and revocation processes remain valid.

 

For more information, read Stephan Andre’s blog SAP Single Sign-On 3.0 - Secure Login Server with Enterprise PKI.

 

 

Streamlined Certificate Lifecycle Management for SAP NetWeaver AS ABAP

 

SAP Single Sign-On 3.0 introduces more efficient management of the certificate lifecycle. The Secure Login Server administration console helps administrators manage the lifecycle of certificates by automating renewals for server components in your landscape. This significantly reduces manual effort, eliminates the risks of human errors, and prevents costly system downtime.

 

An automated central roll-out of trusted root certificates facilitates the transition from self-signed certificates to a PKI-based approach. In addition, the Secure Login Server can act as Registration Authority of an existing enterprise PKI (see above).

 

To see the configuration of Secure Login Server and certificate lifecycle management in action, watch our new demo videos: Part 1, Part 2.

 

 

Expanded Single Sign-On Support for Mobile Devices

 

The Secure Login Server allows you to provision X.509 certificates to mobile devices in multiple ways. In the past, you could use the Simple Certificate Enrollment Protocol (SCEP), which is supported by iOS. SAP Single Sign-On 3.0 now also supports the provisioning of X.509 certificates to a mobile device via the SAP Authenticator mobile app for iOS. You can now even develop your own custom code for certificate enrollment using the REST API provided by the Secure Login Server. Check out Stephan Andre’s blog SAP Single Sign-On 3.0 - Secure Login Server REST API for an example.

 

Optionally, customers can integrate Secure Login Server and the SAP Mobile Platform, and benefit from a seamless user experience for mobile applications. For more information, see Martin Grasshoff's blog.

 

In addition, SAP Single Sign-On 3.0 now also offers a mobile SSO solution for shared mobile devices. The solution is currently available via the SAP Authenticator app for Android and is based on NFC reader technology. For more information, read Donka Dimitrova’s blog SSO Solution Also for Shared Mobile Devices.

 

 

New Encryption-Only Mode to Ensure Secure Communication, Always

 

The new encryption-only mode of SAP Single Sign-On 3.0 enables network encryption for the SNC protocol used for communication with SAP systems, even if a user-specific security token is temporarily unavailable or not yet configured. This allows customers to immediately protect data communication during an implementation project, before user-specific configuration is in place, and to ensure data privacy if the end user has lost the smart card holding the required digital certificate, for example.

 

 

New, Integrated Secure Login Web Client

 

SAP Single Sign-On 3.0 comes with a new version of the Secure Login Web Client, based on a renovated architecture and more integration options. With the help of the Secure Login Web Client, a business process running in a browser session — either in the cloud or on-premise — can trigger seamless authentication for a native client on the user desktop, such as SAP GUI.

 

As of SAP Single Sign-On 3.0, the Secure Login Web Client no longer depends on Java or ActiveX, eliminating previous limitations around browser support. For more information, read Regine Schimmer’s blog Secure Login Web Client (SLWC): Future-Proof Architecture Update.

 

 

Enhancements for Cryptographic Capabilities and Security Protocols

 

SAP Single Sign-On 3.0 now also supports Perfect Forward Secrecy for SNC communication, mitigating the risk that compromised keys allow an attacker to decrypt previously recorded session data. In addition, the new release supports the SSL/TLS cipher suite “TLS_FALLBACK_SCSV”, ensuring better protection against protocol downgrade attacks.

 

 

From Release 2.0 to 3.0: Simple Update Process

SAP Single Sign-On 3.0 is a non-disruptive, evolutionary release building on a stable core. The stability of the core and the simplicity of the product remain our key objectives, keeping implementation efforts and TCO as low as possible.

 

So if you are already using release 2.0 today, what can you expect when updating to release 3.0? With the new version we offer a lean update process through a compatible functionality set with extended functionality being optional. What this means in practice for you:

 

  • Version 3.0 continues to support all capabilities of version 2.0. The fundamentals of the main scenarios remain unchanged; an implementation started on version 2.0 does not need to be repeated or adapted on version 3.0.

 

  • Version 3.0 allows customers to extend the coverage of their existing implementation to additional scenarios. The new capabilities are optional and can be enabled any time.

 

  • Updating product components from version 2.0 to 3.0 is as easy as a patch. Versions 2.0 and 3.0 are interoperable. This means that as long as no version 3.0 specific functionality is required, components can be updated in any order.

 

 

Don’t Miss our Upcoming Webinars

Get up to speed on the enhancements and simplifications that are available in the new version 3.0! Join us for one of the upcoming webinars “Simple Steps towards Higher Security with the new Release SAP Single Sign-On 3.0”, depending on your location and language preference:

 

  • July 8, 10:00 AM CET (German). Hosted by the German User Group (DSAG), Working Group Identity Management & Security. Please note that you need to be a DSAG member in order to join the webinar.

 

 

  • July 15, 02:00 PM CET (English). Hosted by the International Focus Group (IFG) for SAP Security, Data Protection & Privacy.

 

  • August 26, 10:00 AM CET (German). Due to high demand, we will offer this additional webinar. To register please contact Christian Cohrs by August 19. Dial-in information will be sent out on August 22.

 

 

SAP Single Sign-On @ SAP TechEd 2016

Also visit us at SAP TechEd 2016 where you will have the opportunity to gain insight into security products from SAP. Learn more about our proven SAP Single Sign-On product and its latest enhancements in the following sessions:

 

  • SEC103, Simple Steps Toward Increased Security with the New SAP Single Sign-On 3.0 (Lecture)
  • SEC163, Protect your SAP Landscape with X.509 Certificates Using SAP Single Sign-On (Hands-On Workshop)
  • MOB360, Enable SAP Single Sign-On for SAP Fiori Apps (Hands-On Workshop)
  • SEC819, Road Map Q&A: SAP Single Sign-On

 

Register for SAP TechEd 2016 at the following locations:

 

Las Vegas, September 19-23, 2016

Bangalore, October 5-7, 2016

Barcelona, November 8-10, 2016

 

 

More Information

For more information about the SAP Single Sign-On 3.0 release, check the following resources:

 


#askSAP Social Media Analysis


Last week SAP held its quarterly #askSAP call focusing on BI core solutions.  See #askSAP Innovation in Core BI Solutions Call Notes | SCN for a recap.

 

Below is a quick analysis of what happened on social media, looking at a 7 day time period of the #askSAP tag.

 

1loc.png

 

Figure 1

 

Figure 1 looks at the location of the person tweeting. The values are mixed: a continent (North America), a country (South Africa), a state (Virginia), or a city (Chicago). It is not consistent, but it is not surprising that North America has the most "tweeters". In the selected time period, I found 159 unique Twitter IDs using the #askSAP tag.

 

From South Africa I saw Louis De Gouveia tweet along with zimkhita buwa

 

2timezone.png

 

Figure 2

 

The time zone of the tweets is interesting, as it varies. Again, it is not surprising that Pacific time has the top counts, given that the SAP Analytics Twitter account is based there.

3toptweets.png

Figure 3

 

Figure 3 shows who was tweeting the most with the tag.

4tweetsbyday.png

Figure 4

 

Since the webcast was on June 28th, it is not surprising that most of the tweets were on that day.

5source.png

Figure 5

 

Figure 5 shows the source of the tweets. At events it is usually the iPhone, but since this was a webcast, the Twitter web client produced the most tweets, with Twitter for Android at number two.

 

6texttype.png

Figure 6

 

Figure 6 shows rudimentary text analysis of the tweets, with most tweets about the topic and a few about the products

7textanalysismentions.png

Figure 7

 

Figure 7 shows mentions of Twitter IDs in the tweets. Many of the speakers are mentioned, as well as attendees such as Louis de Gouveia of South Africa.

 

9tagcloudtopic.png

Figure 9

 

Figure 9 shows a word cloud of tweets by topic.

 

The most retweeted tweet, at 23, was this one:

 

The most favorited tweet at 13 favorites was this:

Introduction to SAP Business Suite 4 SAP HANA


This post aims to give the members of this community who are not fully familiar with S/4HANA an overview, and to provide more information before they go deep into this journey and into the more technical details.

This is one of many posts that I am planning to share with this community.

SAP Business Suite 4 SAP HANA (also written S/4 HANA or S/4HANA) is the new business suite officially announced by SAP in 2015.

Time to re-build the Business Suite for the digital world

Capture2

(image source: SAP)

SAP has been enhancing its business applications as technology evolves, and 2015 was the year a revolutionary business suite was officially announced. Thanks to new hardware technologies that brought massive processing power and speed at decreasing cost, huge memory, and multicore processors, SAP was able to redesign its existing products, and this gave rise to SAP S/4HANA.

The S/4HANA is a core component of a whole new “next generation” of SAP products.

SAP S/4HANA – A reimagined suite to reimagine business

Business Scope of release 1511:

  • Finance: Accounting Operations
  • Finance: Unified Ledger & Fast Close
  • Manufacturing: Production Processing & Subcontracting
  • Manufacturing: Quality Management
  • Supply Chain: Production Planning
  • Supply Chain: Inventory
  • Sourcing & Procurement: Operational Procurement
  • Sourcing & Procurement: Contracts and requisitioning
  • Sales: Sales Order Fulfillment & Returns
  • Service: Project and Service Management
  • Human Resources: Time Management

The Digital Core

(image source: SAP)


Companies can start their journey with basic components and grow later by adding new products for more robust enterprise management. S/4HANA, like SAP ERP ECC, is the main product that supports all core business processes in the company, such as P2P and OTC. In combination with applications such as SAP Hybris, SAP SuccessFactors, SAP Ariba, SAP Fieldglass, Concur, and Internet of Things projects, SAP offers a digital value network, which interconnects all aspects of the value chain in real time to drive business outcomes. These are some of the other business solutions that can be added at any time to provide the best experience and technology for managing the many different businesses in a company.

S/4HANA is built on SAP's advanced in-memory platform, SAP HANA, and runs ONLY on a HANA database, as no other database can provide the speed and technology required to achieve the level of experience this new suite is expected to deliver.

SAP HANA DB

Before we go into more detail on S/4, let's talk briefly about the HANA DB.



s4-hana-6-638

(image source: SAP)


Years ago SAP started researching how to develop its products and applications to run on an in-memory database. When it became clear that the established database vendors could not deliver what SAP wanted, SAP began its own in-memory database development. SAP had to work very closely with the market-leading chip manufacturers to find the optimal database design, one that could exploit the full power of the next generation of processors.

In 2011 SAP HANA was finally announced. It was available as a standalone data mart solution that allowed customers to capture data from any source in real time, load that data into the in-memory database, and build BI reports and applications on top of it. SAP then offered SAP HANA as an accelerator, deployed as a side-car engine running in parallel to SAP ERP for critical business functions that were performing slowly. Meanwhile, SAP started developing new applications completely powered by SAP HANA (for example, Smart Meter Analytics).

Then, in 2012, SAP migrated an already existing application, SAP BW, to SAP HANA, and this was followed by the SAP Business Suite, named Suite on HANA (SoH), which is not S/4HANA. As mentioned already, 2015 was the year SAP announced S/4HANA, after completely rewriting the ECC. Unlike SoH, S/4 is a brand-new code line that works only on SAP HANA and cannot run on any other vendor's database; this removes the limitations of those databases and exploits 100% of SAP HANA's capabilities.


Key Aspects of SAP S/4HANA

Screen Shot 2016-06-28 at 3.34.56 PM

(image source: SAP)


So, what changed? It is hard to explain in detail in a post that only aims to introduce S/4HANA, but I can explore it in more depth in future posts.

  • S/4HANA is natively built on SAP HANA and therefore inherits the capabilities of this powerful application platform and database management technology, including predictive analytics, advanced text mining, and real-time decision support.
  • A new UI offers a personalized user experience with SAP Fiori and delivers the same level of productivity across devices such as desktops, laptops, tablets, and smartphones.
  • The data model was simplified by removing unnecessary tables and redundant stored data; as a result, the data footprint was significantly reduced, simplifying application design and extensibility.
  • It can be deployed in the cloud or on-premise.
  • S/4HANA Cloud Edition* is delivered in different flavors:
  • Enterprise Edition: ERP with cloud extensions (SuccessFactors Employee Central, Fieldglass, Hybris, Concur, and Ariba)
  • Project Services Edition: focused on professional services, with cloud extensions (SuccessFactors Employee Central, Fieldglass, Concur, and Ariba)
  • Marketing Edition: customer engagement and commerce
  • Real-time OLAP capabilities inside the ERP system allow you to run all your operational reports directly on the source data using SAP HANA Live.
  • Fiori Smart Business cockpits using SAP HANA Live provide real-time insight into running business processes.
  • Free-text search: a Google-like experience in the ERP that improves, for example, customer service by reducing customer response times for sales order inquiries.
  • Elimination of batch processes, enabling real-time operations, for example in Plant Maintenance.

 

*The S/4HANA Cloud Enterprise Edition provides a state-of-the-art base for completely innovating and digitizing the business. Processes and information flows are greatly simplified, and employees can work proactively with accurate, contextual, and personalized real-time information. Through the HANA Cloud Integration platform it interfaces easily with other SAP cloud solutions, so the digitization of all processes in the chain is quickly realized.

Screen Shot 2016-06-28 at 6.25.24 PM

(image source: SAP)

Due to the new data model, SAP S/4HANA successively replaces the old code line with a new code line, which is free from the limitations of traditional databases and allows SAP, partner, and customer developers to explore and maximize the benefits of in-memory database technology.

SAP HANA in-memory technology can be defined as in-memory first processing instead of replicating a subset of data from disc to memory.

S/4HANA builds on the capabilities of SAP HANA, and you can see this in the new S/4HANA applications:

Application Services: As well as a database, SAP HANA can also provide many application services. This means many applications can be built in a 2-tier model, rather than a 3-tier model.

Processing Services: SAP HANA can handle many new types of data, including text, spatial, graph, and more. But it is not enough to simply store these new data types; we need to be able to build applications that process and integrate this data with traditional data types, such as business transactions. SAP HANA provides native in-memory engines that process all of these data types in real time.

Integration Services: SAP HANA has multiple data consumption options built in. We can analyze continual streaming data, read data remotely in any data source, read Big Data stores such as Hadoop, synchronize in both directions with remote databases and devices that collect data (IoT). SAP HANA has built in Extraction, Transformation and Loading (ETL) capabilities so that separate software is no longer needed to clean, enrich and profile data from any sources.

Database Services: SAP HANA is a full in-memory column and row store database that can support both OLTP and OLAP requirements and is built to run on high-end hardware. It stores data optimally using automatic compression and is able to manage data on different storage tiers to support data ageing strategies. It has built in high availability functions that keep the database running and ensure mission critical applications are never down.

I hope you find this information useful... See you on my next post.

Leandro da Pia Nascimento

Webinar: Standing at the Edge: Streaming Analytics in Action


Eric Kavanagh from the Bloor Group hosts this webinar on July 26th to discuss how the disruptive force of the Internet of Things presents a new breed of opportunity. Tracking sensor and machine data can deliver considerable insight, but it can also generate the type of intelligence that will transform decision making and business operations. To achieve this, organizations must create an architecture that collects, analyzes and responds to events as they happen.

 

Register for this episode of The Briefing Room to:

Hear the virtues and challenges of analytics at the edge from veteran analyst, Mark Madsen.

Learn about HANA Smart Data Streaming, a solution designed for reliable streaming data capture and real-time analytics with Neil McGovern.

See a demo by Tim McConnell showing how the platform can perform complex event processing and monitoring over the Internet of Things.


Register today- we look forward to seeing you on July 26th!

How to maintain Document Type and Status Schema in Fine-Tune and its determination in Ticket.


Please refer to this blog, which explains how to create a custom (Z) document type and status schema in fine-tuning, and how they are determined in ticket processing:


  1. Go to the Business Configuration work center ->Implementation Projects view.
  2. Select the current project and click on Open Activity List.
  3. Go to Fine-Tune and search for the activity name Tickets for Customer Support and open it.
  4. Click on hyperlink Maintain Status Schema.
  5. You can use the default status schema, or you can create your own schema (Z status schema).
  6. Once the Z status schema is added, you can assign statuses to your schema (and set the sort sequence for them).

 

sc3.PNG

 

 

***** Please note: You can also create your own Z statuses using the 'Maintain Status Dictionary Entries' hyperlink, with the required assignment status. For example, I created ZX and ZV with the assignment statuses Planner Action and Provider Action, respectively.

 

  1. Once the status schema is maintained, close it and open the next hyperlink, Maintain Document Types.
  2. Here you can select an existing document type or add your own Z document type (for example, I created one named ZCS - Document Type_CS) and assign the status schema to it, i.e. ZS - Customer Support.

      

  • As you can see in the below screenshot:-

CCC1.PNG

 

  1. Go to the Service work center -> Tickets view -> Create a new ticket with the document type Document Type_CS.
  2. Maintain all the mandatory details, then save and open the newly created ticket.

 

On Edit, you can see the various statuses in the drop-down, as maintained in fine-tuning.

 

sss1.PNG

 

**** Please also note the conditions below, under which Planner Action / Provider Action statuses will not be listed in the drop-down.

 

### 1. Provider Action status is not displayed if:

• Ticket is Inconsistent

• Approval status is In Approval or Not Started or In Revision or Withdrawn.

• Ticket is Open or Completed or Closed

 

=> As you can see, Provider Action is missing from the status drop-down in the ticket above, because the ticket's status was Open.

 

### 2. Planner Action status is not displayed if:

• Ticket is Inconsistent.

• Ticket is Completed.

SAPUI5 and Fiori: Using SAP Web IDE offline


Hello, Brazilian SAP crowd. Today I am writing a post about the offline version of SAP Web IDE.

 

In the previous post, SAPUI5 e Fiori: Primeiros Passos Práticos, we saw how to access the online version of Web IDE. But what if I have no internet connection? Am I unable to code? The answer is no; that is why the offline version was released.

 

This version is very easy to download and install, see for yourself:

  • First, download the zip file here. The zipped file is 230 MB.

 

The Installation and Setup chapter has the instructions for installing SAP Web IDE. Since it is in English and the original audience wanted the information in pt-BR, a brief translation of the relevant section follows:



Installing

Prerequisites: Install the Java Runtime Environment (JRE), version 7 or higher.


  1. Download the trial version (link given above);
  2. At the link, click on Trial Version. An email with the download link will be sent to you.
  3. On the downloads page opened from the link, download SAP Web IDE local installation for windows.
  4. Unzip the file to C:\SAPWebIDE.

 

Getting started

  1. After extracting the contents of the zip file, go to the folder and run orion.exe.
  2. Using the Chrome browser, open the URL http://localhost:8080/webide/index.html.
  3. If this is your first time accessing it, you will need to create a new account. Go to Create New Account and enter a user name and password.

 

That's it. As simple as that. The offline version does not have all the options and settings of the online version, such as creating a connection to the backend using HCP, but it has enough options and components to build apps. Enjoy, and happy studying!

 

More details at SAP Web IDE - Local Trial Version

The author, Oliver Graeff, is a member of SAP's Fiori and UI5 development team.
