
SAP HANA: An Introduction


No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP SE or an SAP affiliate.

This guide is organized as follows:
○ Introduction and overview.
○ SAP HANA architecture. Describes the basic capabilities and architecture.





Contents overview:
Working with Interfaces for Administrators and Developers
Exercise 1: Creating Information Models
Exercise 3: Create Calculation View — Dimension
Exercise 4: Create Calculation View — Cube
Lesson: Defining Text Search and Analysis
Lesson: Explaining Predictive Modeling
Lesson: Describing Graph Processing
Unit 4: Describing Data Provisioning Tools
Exercise 5: Continuing your Learning Journey

More and more services are moving online, and this is set to grow exponentially.

Forward thinking organizations are already taking steps to adapt to the new digital world and grow their businesses. The Growth of the Digital World The exponential proliferation of mobile devices, social media, cloud technologies, and the large amount of data they generate has transformed the way we live and work.

Sixty one percent of companies report that the majority of their people use smart devices for everything from email to project management to content creation. All of this creates unprecedented opportunities for all organizations to grow their businesses by exploiting the connectivity of consumers and business partners, tapping into the depth and variety of new types of data, acquiring this data in real time for real-time decision making, and developing innovative new applications quickly.

Consumerization is driving expectations of what business IT should offer for its users. As users become familiar with smart consumer applications they also demand real time applications, and new, innovative applications that enable deep insight and provide proactive decision support in their jobs.

We cannot just keep adding more complexity to existing IT landscapes in the hope that we can keep pace with trends. What is needed is a fresh start: a blank canvas on which to rebuild business systems from the bottom up, using only the latest technologies aligned to the modern digital world.

The Problem with Current Landscapes

Typical IT landscapes have developed over time into complex arrangements of purchased, acquired, and in-house developed applications, powered by multiple platforms. These platforms can be based on incompatible hardware from different vendors, with different operating systems, different databases, and even different development languages. To try to pull these different applications together, we added yet more applications.

The IT department has been responsible for the integration of these systems.

Moving, harmonizing, and cleaning data results in multiple copies of that data. We have placed huge demands on system resources during batch processing and expect users to wait for long-running processes such as financial close, consolidations, and Material Requirements Planning (MRP). Complex landscapes create fragmented business views of data. To obtain a holistic view, users are required to wait until consolidation is complete.

Developing new applications in a complex landscape is also difficult: it takes time and is expensive to build and maintain. There is too much IT complexity in most organizations, and complex landscapes are costly to maintain, with multiple skills needed.

Complexity is stifling growth and suppressing the agility and innovation that are critical to survival in today's digital world.

One Platform for All Applications

The answer is to have all applications powered by one high-performance platform.


This means a common architecture with only one store for all data, regardless of type. Data is available to all applications in real time, with no more data movement and no more management of multiple data stores. This means only one copy of data is needed for any type of access. Traditionally, systems were optimized either for transactions or for analysis. Analysis systems took on a different design approach: the hardware, database, and data models were built around batch loading, aggregated storage, and a focus on read-intensive queries.

No movement of data is necessary and we always work from the same single copy of the data for any requirement, whether transactional or analytical.

Advances in Technology

How can one platform handle all applications, and why did we not do this earlier? SAP HANA takes full advantage of the recent trends in hardware evolution to ensure it is able to handle such an ambitious challenge.

Let's start with memory. Historically, the high cost of memory meant that only small amounts were available to use. This caused a serious bottleneck in the flow of data from the disk all the way to the CPU. It did not matter how fast the processor was if the data could not reach it quickly.

We now have access to huge amounts of cheap memory. With so much memory available we can store the entire database, of even large organizations, completely inside memory so we have instant access to all data and we eliminate wait times. We can lose the mechanical spinning disk and the latency it brings and rely on memory to provide all data instantly.

Memory is no longer the bottleneck it once was. To address large amounts of memory we need 64 bit operating systems. Let's now consider the CPU. In addition to huge memory, processors continue to improve at a phenomenal rate. We now have high speed multi-core processors that can take on complex tasks and process them in parallel.

This means that even the most complex analytical tasks, such as predictive analysis, can be carried out in real time. So if we have multiple CPUs, each with multiple cores, we have access to huge processing power to consume and process huge volumes of data in minimal time. Advances in the design of on-board cache mean that data can pass between memory and CPU cores rapidly. In the past, even with large amounts of memory, this was still a bottleneck as the hungry CPUs were demanding more data and the journey from memory to CPU was not optimal.

And with modern blade server architecture, we can now easily slot more RAM and more CPUs into our landscape to add processing power or memory in order to scale up to any size.

Introduction to SAP HANA

SAP could have just kept the same business application software that was written 20 years ago, along with the traditional databases that supported it, and installed all this on the new hardware. There would be some gains, but traditional databases and applications were designed around old, restricted hardware architecture.

This means they would not be able to fully exploit the power of the new hardware with all the new developments we mentioned earlier. Put simply, the business software needed to catch up with advances in hardware technology, and so a complete rewrite of the platform was required. The platform is the software side of the equation that was built entirely by SAP.

This means many applications are built in a two-tier model, rather than a three-tier model. For example, imagine an application that allows a project manager to quickly check that all team members have completed their time sheets. This could easily be developed as a web application where only a web browser and SAP HANA are required; no application server is needed.

Everything the developer needs at design time is there, and what is needed at run time is also there. This includes text, spatial, graph, and more. However, it is not enough to simply store these new data types; we need to be able to build applications that can process and integrate this data with traditional data types, such as business transactions. SAP HANA stores data optimally using automatic compression and is able to manage data on different storage tiers to support data aging strategies.

It has built-in high availability functions that keep the database running and ensure mission-critical applications are never down. Further data footprint reductions are achieved because we removed unnecessary tables and indexes. We also reduce the in-memory data footprint by implementing data aging strategies. The benefit of this is that data that is used less frequently can be moved automatically from the hot to the warm store, so we are not filling memory with data that is less useful.

However, this data is still available whenever it is needed. Technically, we could keep all data in memory all the time, but it would not be efficient. Most business applications refer to only a small subset of data for their day-to-day running, and that is typically the most recently created data. We also use temperatures as an easy way to describe where data fits on the scale of usefulness. Active or hot data is data that is very recent, or perhaps data that, although old, is the focus of a current analysis and is being processed.

Passive data, usually called warm data, is useful data but less used. Cold data is rarely, if ever, used. In traditional systems data was either hot in the database or cold archived outside the database.

There were usually never multiple temperatures of data due to the limitations of the technology at that time. Big Data is a term often used, and this refers to the staggering amounts of data that is being collected, especially by machines, sensors, social media, and so on.

In recent years, solutions have been developed for the storage of this type of data. One of the most popular solutions is called Hadoop. Hadoop is not a relational database, and its key role is to provide data storage and access to systems that require the data. Hadoop and other Big Data solutions should be considered in the overall planning for data management.

Push Down Processing to SAP HANA

In the past, the key job of the database layer was to listen out for requests for data from the application server and then send that data to the application server for processing. Once the data had been processed, the results would be sent back down to the database layer for storage. With SAP HANA, this processing can instead be pushed down to the database layer, where it is done quickly in-memory. In the traditional approach, detailed data was summarized into higher-level layers of aggregates to help system performance.


On top of aggregates, we built more aggregates and special versions of the database tables to support special applications. As well as storing the extra copies of data, we also had to build application code to maintain the extra tables and keep them up to date.

A backup of these extra tables was also required, so even IT operations were impacted. In addition to aggregates, we have another inefficiency that we need to remove. Database indexes improve access speed because they are based on common access paths to data. But they need to be constantly dropped and rebuilt each time the tables are updated.

So again, more code is needed to manage this process. The traditional data model is complex, and this causes the application code to be complex. With a complex data model and complex code, integration with other applications and also enhancements are difficult, and simply not agile enough for today's fast moving environment.

We do not need pre-built aggregates. SAP HANA organizes data using column stores, which means that indexes are usually not needed - they can still be created but offer little improvement. As well as removing the aggregates and indexes from the database, we can also remove huge amounts of application code that deals with aggregates and indexes. We are left with a simplified core data model and also simplified application code. Now it is much easier to enhance the applications and integrate additional functions.
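To make this concrete, here is a minimal sketch of an aggregation being calculated on the fly instead of being read from a pre-built totals table; the SALES_ITEMS table and its columns are hypothetical names used only for illustration:

-- Aggregate on demand; no pre-built aggregate table or extra maintenance code is needed
SELECT "CUSTOMER_ID",
       "SALES_YEAR",
       SUM("AMOUNT") AS "TOTAL_AMOUNT"
  FROM "SALES_ITEMS"
 GROUP BY "CUSTOMER_ID", "SALES_YEAR";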

Choice of Configurations

For on-premise deployments, SAP HANA is delivered as a brand new, all-in-the-box appliance where all software and hardware is provided and fully configured by certified partners. There are many different configuration options available to suit all sizes of organization. Many customers already have hardware components and software licenses that they would like to re-purpose, and so this flexible approach ensures implementation costs are kept to a minimum.

This restriction does not apply to non-production installations, for example, development or sandbox systems. SAP HANA is supported on specific versions of Linux.

Flexible Deployment Options

SAP HANA can run in the cloud (all applications in the cloud), on premise (all applications on premise), or in a hybrid arrangement, so you can leverage the deployment option that meets your business priorities. On-premise means the entire solution (the software, network, and hardware) is installed and managed by the customer. A cloud deployment is managed by SAP and other hosting partners, which means customers do not have to be concerned with managing the infrastructure; they can simply get on with using and developing applications with SAP HANA.

Another possibility is a hybrid approach, where a combination of on-premise and cloud is used. SAP HANA is capable of handling any type of application: analytical, transactional, consumer-facing, back office, real-time, predictive, cloud, and more.

SAP HANA is Central to SAP's Strategy

With a single, scalable platform powering all applications, customers have an opportunity to simplify their landscapes and also to develop new, innovative applications that cover all data sources and data types.

The real value in the virtual data models is the business semantics added by SAP. Raw database tables are combined and filters and calculations added to expose business views ready for immediate consumption with no additional modeling needed. So instead of having to refer to multiple raw tables in your reporting tool, creating joins and unions manually, applying filters to add meaning to the data, you simply call a view from the virtual data model and the data is exposed.


Whilst they are different technical approaches, they both deliver the same outcome: a virtual data model that exposes live operational data for analytics. This could be achieved in a variety of ways using standard SAP data replication tools.

Connect IoT with Core Business Processes

Traditional business systems are simply not ready to support the massive growth in device connectivity that is proposed by the Internet of Things (IoT).

Imagine having access to detailed machine data a few clicks away from a business transaction. Let's consider this scenario: A customer is disputing an item on their invoice and complains that the paint we supplied is too lumpy. So we drill down from the invoice, discover the actual line that relates to the paint problem, we drill down to the batch that we supplied, then we drill down to the shop floor data to check the recipe for the paint was correct.

But wait: when we drill down to examine the data generated from the paint mixing machine, we see that it did report overheating problems during the period in question. We now need to talk to the engineers on the shop floor to find out why this was not detected, and get back to the customer with a fast solution.

Sports Analytics: Provide fans with real-time, in-game statistics in order to fully engage them.

The NBA is already up and running with this, and many other sports bodies and teams have similar platforms. With a cloud deployment, SAP manages the entire solution; customers just provide the business users! There are also many ready-built applications from SAP and partners that are powered by SAP HANA, are available in the cloud, and can be used standalone or integrated with existing applications.

You can develop Java applications just as you would for any application server. You can also easily run your existing Java applications on the platform. The SAP HANA Enterprise Cloud (HEC) is not public; it is for dedicated customers and their applications. You can consider HEC an extension of a corporate network.

So, customers pay for what they need and do not have to worry about procuring expensive hardware, software and skills to run their SAP HANA powered applications.

Just bring your business users and any devices.

SAP HANA uses a row and column store database, and the physical storage can be either in-memory, on disk, or a combination of both. There are a large number of engines available. The Application Function Library (AFL) is a repository of ready-made common business functions and predictive algorithms that developers can use in their applications.

Enterprise Information Management (EIM) is optional and is only installed if required. The recent addition of EIM means that customers no longer need to install and use separate data provisioning components for loading. Customers simplify their landscapes by using the built-in EIM capabilities. Smart data access (SDA) enables the management of data at different temperatures. SAP NetWeaver is still required to provide the business layer, the flow logic, and the connectivity and orchestration with other applications. Of course, data has to be acquired, and you may use the built-in EIM components or external data provisioning tools, as mentioned earlier, in addition to remote sources.

This component is optionally used to support light, web-based applications where a full application server and all its capabilities would be overkill. XS provides all the application services you need to access the required data from within SAP HANA's database, call the data processing engines, and run the application logic. XS has a built-in web server, so applications are easily web enabled. JavaScript is the application language used with XS.

SAP HANA comes with all the development and testing tools required to build, deploy, and manage complete applications.

Evolution of the XS Engine

This new version is called XS Advanced and provides even more application services, employs open standards, and is capable of supporting larger and more complex applications written in many more languages.

Classic XS is tied to the database server and so it was not possible to scale up the XS component separately. With XS Advanced it is possible to scale only that component, so more power can be given to the application processor and the database remains unaffected.

All new development objects are now created in the new XS Advanced architecture. XS Classic does not use Cloud Foundry, so customers with XS classic do not have the resources to develop a single application for use in the cloud and also on-premise.

XS Advanced uses the Cloud Foundry architecture, and so applications can be written once and deployed either on-premise or in the cloud with no redevelopment. Applications are divided up into small chunks, which allows the developer to choose the development language for each part. It also means that it is possible to configure each part of an application to consume more or fewer resources as needed. This is known as elastic computing.

XS Advanced is built on a micro-services architecture.

SAP HANA Studio is, for many people, the only interface they need. It is installed locally, is based on Eclipse, and is developed in Java. See the separate lesson later for details. When registering a connection you supply the host and instance number; this pair of details identifies the exact target system.

You can optionally give each connection a description so it is easy to identify the purpose of each system when the list of connections becomes long. It is possible to export the list of connections to a file so that it can be imported by others, and they do not have to manually define the connections.

Of course the user credentials are not saved. You can also use the exported list of connections and share them as a central store. Each user creates a link to this central store and does not need to either create their own connections or import connections.

This means all connection information is managed centrally, so any changes are made in just one place. Perspectives are predefined user interface (UI) layouts that contain several views. A view is a pane of varying size within a perspective that provides specific information, such as a Where Used list.

Each view can be moved around via drag and drop. You can also customize a perspective by adding or removing views. Views can appear in multiple perspectives; for example, the Systems view is used in most perspectives as it presents a hierarchical list of objects in each SAP HANA system that is useful to everyone. It is possible to have several perspectives open at the same time, and to switch from one perspective to another.

To do so, in the perspective switcher in the upper-right corner of the screen, choose the perspective you want to open.

Adding a View to a Perspective

Eclipse is an industry-standard open source software product and comes with many ready-made views. For this reason, you will see a lot of views in the Show View dialog box. This includes the Systems view. To customize a view, choose the View Menu button, and choose Customize View.

Resetting a Perspective

Any perspective can be reset to its default layout in order to restore the default views in their original positions and sizes.

The user specified for a system connection must be active. The landscape XML file does not contain a password; you will have to specify the user and password for any system added to the Systems view.

The Systems View

The Systems view lists all the systems that have been registered manually, or by a landscape import. For each system, the content is organized into database objects, and all these objects are organized into schemas. Schemas are used to categorize database content according to customer-defined groupings that have a particular meaning for users.

Schemas also help to define access rights to the database objects. From a modeling standpoint, schemas can help to identify which tables to use when defining information models.

But a model can incorporate tables from multiple schemas. Schemas do not limit your modeling capabilities. All the information models that will be created in the modeler will result in database views. By default, all the systems that are listed in the Systems view appear in the System Monitor view.

You get the most important information about system status and alerts, as well as disk space, memory, and CPU usage. You can customize this view by adding or removing columns. Alternatively, you can right-click in the System Monitor view and choose Configure Table. If you want to filter the list of systems that are shown in the view, right-click in the System Monitor view and choose System Filter.

Modeler

The Quick View is a practical entry point, dedicated to the Modeler perspective.

From this view, you can create, manage, and transport information models (packages, views), define or execute data provisioning, define schema mapping, and so on. You can define your favorite actions (for example, Export, Import, and Validate) and display only a custom list of these favorites.

You actually select both a system and a user logged on to that system. If you are logged on to the same SAP HANA system with two or more different users, the action will be authorized based on the privileges of the user you have selected. If you have closed the Quick View and want to reopen it, you can do so from the Help menu. Note: the Quick View only displays within the Modeler perspective. The information views, along with other modeling objects such as analytic privileges or procedures, are organized in packages.

Each package is a repository that you can assign to a delivery unit in order to transport the objects it contains. What would you find in a package? This is used by application developers.

For example, here is where you would create the JavaScript and HTML that will be used in your applications. There are plenty of tools to support the developer, including trace, debug, code prompts, check-in, and check-out. You will perform the following tasks: If you are prompted to choose a folder to store settings, use the default location and choose Submit. If you are prompted to choose a workspace folder, leave the defaults unchanged and choose OK.

If you are asked to create a password hint in case you forget your password, choose No. Enter the host name and your credentials as given for the training system. Explore the Systems view by expanding the nodes. In the Catalog node, the system has automatically created a schema for you. The schema name is the same as your user name and is the default schema whenever you work with database objects such as tables.
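If you work in the SQL console rather than the Systems view, the following statements are a small sketch of how to check and change your current schema; the schema name TRAINING is only an example:

-- Show the schema currently used to resolve unqualified table names
SELECT CURRENT_SCHEMA FROM DUMMY;
-- Switch the session to another default schema
SET SCHEMA "TRAINING";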

If you want to work with tables and other database objects, choose Catalog. Most traditional enterprise relational databases are row based because this is regarded as the optimal design for a transactional system. Both table storage types are needed in a system that handles both transactional and analytical applications in the same database.

Column and Row Store

The figure Column and Row Store shows that the key difference between row and column store is the way the same data is organized. Column store tables are efficient for analytical applications where requests for sets of data are not predictable.

Usually only limited columns are required. With column store, only the required columns are loaded to memory so we avoid using up memory with columns that will never be used. Also the data is arranged efficiently with all values of a column appearing one after another.

This continuous sequencing of the column values is preferred by the CPU, which is able to scan the values efficiently without having to skip over values. A few more positive aspects of column store: indexes are usually not required, which helps to reduce complexity by avoiding the need to constantly create, drop, and rebuild indexes; it is easy to alter column store tables without dropping and reloading data; and column store tables are optimal for parallel processing, with each core able to work on a different column.

The downside to column store is the cost of reconstructing complete records from the individual columns when all columns are required by the application.

This is the case when the application is transaction based and so all fields are usually needed for a record update and must all be retrieved.

This would be possible with column store but would be slower than if the storage was row based where all the columns are always held together and can be read quickly. Row storage is still needed to support transaction processing where all columns need to be retrieved. Often an application is both transactional and also analytical. In this case you must decide which is the best storage method to use. You cannot have a table that is both row and column storage.
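As a rough sketch of how this choice is expressed in SQL (all table and column names here are hypothetical), the storage type is declared when the table is created and can be changed later:

-- Column store: suited to analytical access over a few columns of many rows
CREATE COLUMN TABLE "SALES_ITEMS" (
  "ORDER_ID"      INTEGER,
  "CUSTOMER_TYPE" NVARCHAR(1),
  "AMOUNT"        DECIMAL(15,2),
  PRIMARY KEY ("ORDER_ID")
);

-- Row store: suited to record-at-a-time transactional access
CREATE ROW TABLE "ORDER_QUEUE" (
  "ORDER_ID" INTEGER,
  "STATUS"   NVARCHAR(10)
);

-- Convert an existing table from row store to column store; the data is preserved
ALTER TABLE "ORDER_QUEUE" ALTER TYPE COLUMN;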

It is easy to convert a table from row to column and vice versa, and you do not lose the data when doing this. Compression is most impressive when there is a lot of repetition in the data values. For example, a huge sales order table where the customer type A, B or C is stored on each customer order. In this case the customer type would appear a huge number of times in the column.

Compression strips out the repetition and stores only each unique value once in a dictionary store. SAP HANA then uses integers to represent the business values in the original store as this takes up far less space and is also very efficient for scanning.

SAP HANA links the dictionary entries to the actual table using special reference stores that identify the position of where the original value was and its corresponding business value from the dictionary store. The processing happens invisibly. With the new hardware architecture, especially utilizing the new multi-core processors we can ensure instant responses by spreading out the processing task across the cores.

Parallel Processing

SAP HANA automatically spreads the workload across all cores and ensures all parts of the hardware are contributing to the throughput. SAP HANA is scalable, which means you can easily add more processors as required in order to increase the parallelization and therefore the speed of processing.

Column store tables are automatically processed in parallel. Each column can be processed by one core. For column store tables, you can define partitions on each column. This means that only the required partitions are read to memory. For example, if a query requested only current year data, then all other years in the column would be ignored.
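As an illustration, a range-partitioned column table might be declared as follows; the SALES table and the year ranges are invented for this sketch:

-- Partition by year so that a query on one year only touches that partition
CREATE COLUMN TABLE "SALES" (
  "SALES_ID"   INTEGER,
  "SALES_YEAR" INTEGER,
  "AMOUNT"     DECIMAL(15,2)
)
PARTITION BY RANGE ("SALES_YEAR") (
  PARTITION 2015 <= VALUES < 2018,
  PARTITION 2018 <= VALUES < 2021,
  PARTITION OTHERS
);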

Partitions can be created based on known popular business values or by simply allowing SAP HANA to split up large columns in an arbitrary way. Since SPS10 the limit has increased dramatically, to 16,000 partitions per column table.

Data Temperatures

The disk layer is not a separate component. There are two reasons we need the disk layer: to provide an area to unload less important data when memory is full (we call this inactive data), and to enable data recovery if the power fails. We will cover reason 2 later when we discuss high availability.

For now let's focus on reason 1. However, most organizations will size their SAP HANA system with only enough memory to hold the core data and will utilize disk to store the remaining data. This means that there will be competition for memory. When memory is full, the data that is used less often is automatically moved to disk to make way for new data. The larger the memory, the less displacement is needed.

Remember also that some space is needed in memory as a working space for calculations. An organization usually values their recent data higher than older data, and often find themselves accessing the recent data more frequently than the older data. Conceptually, data can be classified into temperatures. For data that is accessed frequently, we call this hot data. Data that is accessed less frequently is called warm data. Data that is rarely accessed often retained only for legal purposes is called cold data.

For now, we will focus on hot and warm data. Quite simply, any data that is accessed by any application always comes from memory. So this means that if the table is sitting in the persistent layer, the moment it is needed, the table is then automatically loaded to memory. Column tables can be partitioned and SAP HANA is smart enough to know only to load the required columns and partitions to memory and leave the unwanted columns and partitions in the persistent layer.

Delta Merge

Updating and inserting data into a compressed and sorted column store table is a costly activity. This is because each column has to be uncompressed, the new records inserted, and the column recompressed, and thus the whole table is reorganized each time. For this reason, SAP has separated these tables into a main store (read-optimized, sorted columns) and a delta store (write-optimized, non-sorted columns or rows). There is a regular, automated database activity that merges the delta stores into the main store.

This activity is called Delta Merge. Queries always run against both main and delta storage simultaneously. The main storage is the largest one, but because its data is compressed and sorted, it is also the fastest one.

Delta storage is very fast for insert, but much slower for read queries, and therefore kept relatively small by running the delta merge frequently. The delta merge can be triggered based on conditions that you can set.

When the system evaluates these conditions and finds them true, the delta merge is triggered.

Delta merge can also be triggered by an application. Staying on top of the delta merge is critical to maintaining good performance of SAP HANA and the administrator is responsible for this.
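As a sketch of what a manual trigger looks like (the table name SALES is hypothetical; M_DELTA_MERGE_STATISTICS is a standard monitoring view):

-- Request a delta merge for one table; the system still decides the exact timing
MERGE DELTA OF "SALES";
-- Review recent merge activity for that table
SELECT * FROM "M_DELTA_MERGE_STATISTICS" WHERE "TABLE_NAME" = 'SALES';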

Refer to training course HA to learn more about delta merge.

Multi-Tenancy

With multi-tenancy there is a strong separation of business data, and also of users who must be kept apart.

Each tenant has its own isolated database. Business users would have no idea that they are sharing a system with others running different applications. The system layer is used to manage the system-wide settings and cross-tenant operations such as backups. The benefit of a multi-tenancy platform is that we can host multiple applications on one single SAP HANA infrastructure and share common resources in order to simplify and reduce costs.
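As an illustration, and assuming you are connected to the system database of a multitenant system, a new tenant database can be created with a statement along these lines; the database name and password below are placeholders:

-- Create a new, isolated tenant database with its own SYSTEM user
CREATE DATABASE DEV_TENANT SYSTEM USER PASSWORD Initial1234;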

Multi-tenancy is the basis for cost-efficient cloud computing.

You can skip this step if you have already logged on. Locate the table MARA by using a filter on the Tables node. Open the definition of table MARA and identify whether the table is row or column store. Identify the key columns of table MARA. Identify the number of records loaded to the table and also the storage used by the main and delta areas. Preview the data of table MARA. Why is there no delta storage value for this table, and why are there no partitions available?

There is no delta storage value for this table because this is a row table and delta storage is only relevant for column tables. There are no partitions available because this is a row table and only column tables have partitions.

The table list is now filtered and displays the table MARA in the filtered list. Notice the icon for the table represents column store, as this is a column store table. This screen shows the table structure with all columns, their data types, and lengths. This information is provided in the top-right corner of the screen. Identify the key columns of table MARA.
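If you prefer SQL to the Studio screens, a rough equivalent of these checks is sketched below; it assumes the table sits in a schema called TRAINING, and uses the standard monitoring views M_TABLES and M_CS_TABLES:

-- Row or column store, and number of records
SELECT "TABLE_NAME", "TABLE_TYPE", "RECORD_COUNT"
  FROM "M_TABLES"
 WHERE "SCHEMA_NAME" = 'TRAINING' AND "TABLE_NAME" = 'MARA';

-- Main and delta memory usage (column store tables only)
SELECT "MEMORY_SIZE_IN_MAIN", "MEMORY_SIZE_IN_DELTA"
  FROM "M_CS_TABLES"
 WHERE "SCHEMA_NAME" = 'TRAINING' AND "TABLE_NAME" = 'MARA';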

Preview the data of table MARA. At regular intervals, the in-memory data is written to disk; this is called a savepoint.


The frequency of savepoints is configurable and really depends on how frequently the database changes due to updates, inserts, and deletes. But if savepoints take place only every few minutes, what happens if the power goes off after we have added some new records and we have not yet reached the next savepoint?

Do we lose this data? No: between savepoints, every committed transaction is also saved to a log area. This log area is often based on flash memory (SSD) to ensure ultra-fast access. So we capture every update to the database, and this ensures the system can be brought back to exactly where it was when we lost the power. This all happens automatically in the background. SAP HANA can also be distributed across multiple servers; we call this scale-out. Scale-out is often used to spread the processing load across multiple servers in order to improve performance.

Scale-out is also used to provide redundant servers that are on stand-by in case active servers fail. If a server fails, SAP HANA can automatically swap out to a standby server in order to ensure downtime is minimized or even eliminated.

A standby server can be on warm standby which means that it is in a near-ready state and does not need to be started from cold. Standby servers can also be on hot standby. In this case, the standby server continuously replays the database log so that the databases are always in sync and ready to go.

In this case there is almost no downtime when switching to the standby server.


This approach would be necessary for a mission critical operation where down-time would be harmful to the business. SAP HANA simply uses the savepoints and logs, described earlier, to bring the standby server up to date with the very latest data.

Authorization: for each role or user, grant and revoke access to business data, database objects, system actions, development objects, projects, and more.
Encryption: encryption services to ensure your data is stored securely, and also to allow you to set up encryption for secure communication between SAP HANA components.
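As a small illustration of the authorization layer (the role, schema, and user names here are made up):

-- Create a role, grant it read access to a schema, then grant the role to a user
CREATE ROLE "REPORTING_READER";
GRANT SELECT ON SCHEMA "TRAINING" TO "REPORTING_READER";
GRANT "REPORTING_READER" TO "STUDENT_00";
-- Access can be withdrawn again with REVOKE
REVOKE "REPORTING_READER" FROM "STUDENT_00";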

Open any perspective where the view is available and look for the Security node in the Systems view. From this node, open the Security Console to view the system settings and policies. Locate the user or role node and double-click to manage authorizations.

Learning Assessment

Where is SDI used?

What is XS? What is a perspective? What are the advantages of column store tables? A) Data footprint is automatically reduced through compression. B) Only the columns required are actually loaded to memory. C) Columns can be partitioned. D) Aggregates can be created. Determine whether this statement is true or false: row store tables are more efficient when there are lots of repeating data values in columns.

To maintain good read performance in a constantly changing database, which two components are used?

Why do we still need a persistent layer? What are the two storage components used to restore the database in case of power failure? What is scale-out?

The role of the database in a traditional application is to provide data. The application sends down SELECT statements to individual tables in the database and, often, many tables are involved.

The raw data is sent from the database to the application. The application then begins to process the data by combining it, aggregating it, and performing calculations. It may be possible for the database to take on some of these basic tasks but largely, the database is asked to do nothing more complex than supply raw data to the application, which does all the hard work. Therefore, we can find ourselves moving a lot of raw data between the database and the application.

We move data to the processing layer, making the application code complex. It has to deal with the data processing tasks as well as manage all of the process flow control, business logic, user interface (UI) operations, integration of data from multiple sources, and so on.

Modeling in the Database

With SAP HANA, we can build a modeling layer on top of the database tables to provide data to the application in a ready-to-go, processed form.

This is efficient in several ways. Developers find themselves continually creating the same code to process data: when dealing with highly normalized database models, such as those used with the SAP Business Suite, there can be many individual tables that need to be called and combined with joins. These joins can be defined once in an information view and pushed down to the database. Information views can also accept input, which means the applications can pass variables down to the view, for example, in response to a filter value from a business user.
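For example, a calculation view built in the classic repository can be queried with an input parameter using the PLACEHOLDER syntax; the package, view, column, and parameter names below are invented for illustration:

-- Pass the value 2019 to the view's input parameter IP_YEAR
SELECT "REGION", SUM("AMOUNT") AS "TOTAL_AMOUNT"
  FROM "_SYS_BIC"."student00.models/CA_SALES"
       ('PLACEHOLDER' = ('$$IP_YEAR$$', '2019'))
 GROUP BY "REGION";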

Many of the views can also call procedures that have input parameters. Information views can consume other information views, which encourages a high degree of modularization and reuse. The views can also make use of SAP HANA's built-in functions; these include textual, spatial, and predictive functions. Although the attribute and analytic views have been available since the first release of SAP HANA, they have become less important as newer releases of SAP HANA delivered more powerful calculation views that slowly took over all the functionality of the other two views.

Since SPS12, attribute and analytic views should be avoided. This means modeling is simpler with only one type of view to consider. Migration tools are available so that customers can easily convert the attribute and analytic views to calculation views. We will cover attribute and analytic views later to ensure you develop some basic skills and awareness of these. For now, let's focus on the current recommendation by SAP, which is to always model with only the calculation views.

Choosing the Correct Type of Information View

By selecting various combinations of settings, you can define three basic behaviors of a calculation view: 1. Dimension 2. Cube without star schema 3. Cube with star schema.

Modeling a Dimension

Let's start with a dimension, as this is the most likely to be created first. The purpose of a dimension type of calculation view is to define a list of related attributes such as material, material color, weight, and price.

This list can be directly consumed by an application using SQL, although it is most likely to be found as a component in another calculation view of the type CUBE when creating star schemas. Dimension type calculation views do not contain measures; they contain only attributes.

This means that without measures, aggregation is not possible. Reporting tools cannot directly access calculation views of type dimension.

Only SQL access is allowed. It might be helpful to think of calculation views of type dimension as master data views. You would not model transaction data using dimension calculation views as no measures can be defined, and measures are for modeling with transactional data.
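For example, an activated dimension calculation view can be read with plain SQL from the _SYS_BIC schema; the package, view, and column names below are hypothetical:

-- Read the attribute list exposed by a dimension calculation view (no aggregation)
SELECT "MATERIAL", "MATERIAL_DESC", "WEIGHT"
  FROM "_SYS_BIC"."student00.models/CA_DIM_MATERIAL";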

Be careful not to confuse measures with attributes that are of a numerical data type, such as integer or decimal. A numeric field can be included in this dimension calculation view, but it cannot be modeled as a measure and must be modeled only as an attribute. This means there is no aggregation behavior possible; for example, you could include weight, but you cannot sum it, and the output will appear as a list of all weights.

Modeling Dimensions

You then proceed to define the source tables, the joins, the filters, and the columns that are to be exposed. It is also possible to define additional derived attributes, for example, a new column to generate a weight category based on ranges of weights using an IF expression.

Finally, you are able to rename any columns to be more meaningful for the calling application. Remember, the column names originate from the source tables, and these names can be user unfriendly.

Modeling a Cube

Now let's move on to the next type of calculation view.

This is of the type Cube and is used to define a data set made up of attributes and measures that can be used in a flexible slice and dice format. This is not a star schema, as there are no dimensions defined (we will cover that in a moment), but simply a data set based on one or more transaction tables that can be queried using any combination of attributes and measures to create an aggregated data set.

Reporting tools can directly access this type of calculation view as well as access via SQL. Do not set the Star Join flag. This will be used later in the third and final calculation view type. You will then select the table, or tables that are to be included in the model. Typically you choose a transaction table so that you have columns from which you can define attributes and measures. It is possible to include more than one table, for example, you may need to include a header and a line item table to form the complete picture of a sales transaction.

In this case you simply join the tables using a JOIN node. Now select the columns from the tables that are to be exposed; you can optionally set filters and define additional calculated columns.

The last step is to rename any columns to provide meaningful names to the user.

Modeling a Star Schema

Now comes the final calculation view type: the cube with star join. The key reason for adding the DIMENSION views is that you are then able to request aggregations of any measures in the fact table by any combination of attributes, not just those attributes from the fact table, but also attributes from any dimension.

This increases the analysis possibilities significantly. It can include attributes and measures. It is used to present aggregated views of the data set in the most efficient way. Select the transaction tables and create joins to combine the transaction tables. Then choose the columns to expose and set any filters and create any calculated columns.

What you are doing up to this point is to form a fact table that will be used as the hub of the star schema. The last step is to improve the names of any columns by using the rename function in the semantic node. There are some limitations when using the Web Workbench. For example, only calculation views (of any type) can be created and maintained in the Web Workbench, whereas attribute, analytic, and calculation views can all be maintained in SAP HANA Studio.

As calculation views are the most important of all the views and, in fact, may be the only type of view you will ever create, working with the Web Workbench would be absolutely fine. You access this view from the Help menu option. However, you cannot work with attribute or analytic views.

Calculation views created with one interface can be accessed with the other. These views will be joined later to the Sales fact data in another calculation view. In this exercise you work in the Training database schema with the provided column tables; when values include a placeholder, replace those characters with your own student number. Create a package called student.

Add all fields to the output. Connect the Projection nodes. Add all columns to the output of the final Projection node. Save and activate the new view, then check the Job Log view to view the status of the activation job.


Connect the Join node to the Projection node. Leave all other entries with default values and press OK. You should now see two Projection nodes, one on top of the other. The Status should read Completed successfully. You should see three records. You should see eight records. You will now combine the two views you created earlier that represent the dimensions with a sales transaction table to create a star schema that can be used later for multi-dimensional analysis.
