
Sunday, January 31, 2016

How to Check HANA Database Size



Here I explain how to check the HANA database size using the hdbsql and hdbcons utilities.

First, log in to the HANA database using hdbsql, then run the queries below.

1- The number of used blocks in the data volumes

hdbsql SID=> select sum(allocated_page_size) from m_converter_statistics

SUM(ALLOCATED_PAGE_SIZE)
3497635840

2- The license relevant HANA memory usage

select product_usage from m_license

Note : The first metric is useful for comparing HANA with other RDBMS, e.g. what the compression ratio is compared to ordinary row-store databases. The second metric is important e.g. for sizing the hardware.

Checking Database Size Using hdbcons Utility -

xxx200132:HDB:sidadm xyz/usr/sap/SID/HDB02/vadbabc> hdbcons 'dvol info'

SAP HANA DB Management Client Console (type '\?' to get help for client commands)
Try to open connection to server process 'hdbindexserver' on system 'ABC', instance '02'
SAP HANA DB Management Server Console (type 'help' to get help for server commands)
Executable: hdbindexserver (PID: 9104)
[OK]
--
DataVolume #0 (xyz/hdb/SID/data/mnt00004/hdb00002/)
  size= 16760078336
  used= 3464384512

NOTE: We recommend shrinking this DataVolume. Use 'dvol shrink -i 0 -o -v'.
      Please use a percentage higher than 110% to avoid long runtime and performance decrease!

[OK]
--
[EXIT]
--
[BYE]
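The sizes above are reported as raw byte counts. A small helper (illustrative only, not part of any SAP tool) converts them to gigabytes so the hdbsql and hdbcons figures are easier to compare:

```python
def bytes_to_gb(n: int) -> float:
    """Convert a raw byte count to GiB, rounded to two decimals."""
    return round(n / 1024 ** 3, 2)

# Values reported above:
allocated_pages = bytes_to_gb(3497635840)   # sum of allocated pages (hdbsql)
dvol_size = bytes_to_gb(16760078336)        # DataVolume size (hdbcons)
dvol_used = bytes_to_gb(3464384512)         # DataVolume used (hdbcons)
```

Note that the used blocks reported by hdbsql (about 3.26 GB) line up closely with the `used` figure of the data volume reported by hdbcons (about 3.23 GB).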

How to query from HANA Database



Here I explain how to query the HANA database using hdbsql.

Execute the statements via hdbsql -

SID:xyz/usr/sap/SID/HDB02>   hdbsql -n L14 -i 02 -u HANADBXY -p pwd -a

Welcome to the SAP HANA Database interactive terminal.

hdbsql=> \o test.out      – specifying the output file; make sure the output file has been touched
hdbsql=>  \i test.sql      – specifying the input file, the SQL file created before

1 row selected (overall time 15.39 msec; server time 740 usec)

1 row selected (overall time 14.15 msec; server time 4.203 msec)


hdbsql SID=> \o       – closing the output file
hdbsql SID=> \q       – quitting from hdbsql

SID:xyz/usr/sap/SID/HDB02>  more test.out

"/ABC/ABC14",11909
"/ABC/ABC15",6999691
"/ABC/ABC12A",29544151

SID:xyz/usr/sap/SID/HDB02>

Wednesday, January 27, 2016

Keep Shared Memory Over Restart in HANA



Here I explain the parameter keep_shared_memory_over_restart, which relates to shared memory, and how it works during HANA database startup.

Parameter Name       - keep_shared_memory_over_restart
Default Value           - true
About Parameter      - Can be set to true or false; if true, the row store data stays in shared memory.

How to Set Parameter - indexserver.ini -> [row_engine] -> keep_shared_memory_over_restart

If set to true, SAP HANA will keep the row store in memory when the technical preconditions are fulfilled. If set to false, the row store is generally recreated from scratch during a startup.

Note : If you run ipcs -m on the OS level to view the shared memory, you will see segments set to DEST status that do not go away even after the HANA services are stopped (soft shutdown). The reason is that the row store data is still in shared memory, so that when the HANA services are restarted the data does not have to be reloaded.


Related Topic - Shared Memory Parameter in HANA

Tuesday, January 26, 2016

Mergedog in HANA



Mergedog is a process that triggers automatic delta merges based on a decision formula.

FAQ - How to check the configuration of mergedog in the ini files?

In the Configuration tab of SAP HANA Studio, filter for 'mergedog'. Expand the indexserver.ini section and check whether the parameters are configured as default:

Default Value        -  active = Yes
Default Value        -  check_interval = 60000 (corresponding to one trigger of mergedog per minute)

To correct the parameter value, double-click on the parameter name and choose Restore Default. This will delete all custom values on system and host level and restore the default value system-wide.

Note: All other parameters in mergedog configuration should only be changed based on recommendations from SAP Support after analyzing your specific case. Typically such a change is not required.

Related Topic - Delta Merge in SAP HANA
Related Topic - Delta Merge Issue in SAP HANA
Related Topic - Auto Merge threshold formula in HANA

Auto Merge threshold formula in HANA



Here is the threshold formula for the auto merge; it is configured in auto_merge_decision_func under indexserver.ini.

(((DMS>PAL/2000 or DCC>100) and DRC>MRC/100) or (DMR>0.2*MRC and DMR>0.001)) and (DUC<0.1 or 0.05*DRC>=DUC)

DMS : Delta memory size [MB]
PAL : Process allocation limit [MB]
DCC : Delta cell count [million] This refers to the current number of cells in the delta storage of the table.
DRC : Delta row count [million] This refers to the current number of rows in the delta storage of the table.
MRC : Main row count [million] This refers to the current number of rows in the main storage of the table.
DMR : Deleted main rows [million] This refers to the number of deleted records not in delta storage, but marked as deleted in main storage. Merging makes sense if there are many deleted rows.
DUC : Delta uncommitted row count [million] This refers to the number of uncommitted rows in the delta storage of the table.

How to Set Parameter -  indexserver.ini -> auto_merge_decision_func

Note :  this formula can be found in the SAP HANA Administration Guide.

If you evaluate this formula against the result of selecting table BGRFC_UNIT_TIME from M_CS_TABLES, you will get an idea of the auto merge threshold.

Example: 

RECORD_COUNT : 7,039
RAW_RECORD_COUNT_IN_DELTA : 30,980
In this case, the condition (DMR>0.2*MRC and DMR > 0.001) is not met:
((8,489 - 7,039)>0.2*8,489 and (8,489 - 7,039) > 0.001)

This is why the auto merge is not triggered in this case.
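The deleted-rows clause checked in the example above can be verified with a small sketch. This is not SAP code; it only implements the `(DMR>0.2*MRC and DMR>0.001)` part of the decision function, with the counts taken in millions as the formula expects:

```python
def deleted_rows_clause(mrc_rows: int, dmr_rows: int) -> bool:
    """Evaluate (DMR > 0.2*MRC and DMR > 0.001) with counts in millions."""
    mrc = mrc_rows / 1_000_000  # MRC: main row count [million]
    dmr = dmr_rows / 1_000_000  # DMR: deleted main rows [million]
    return dmr > 0.2 * mrc and dmr > 0.001

# Example above: 8,489 main rows with 8,489 - 7,039 = 1,450 deleted rows.
triggers = deleted_rows_clause(8489, 8489 - 7039)
```

With the example numbers the clause evaluates to False (1,450 is not more than 20% of 8,489), matching the conclusion that no auto merge is triggered.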

Related Topic - Delta Merge in SAP HANA
Related Topic - Delta Merge Issue in SAP HANA
Related Topic - Mergedog in SAP HANA

Preload Column Tables Parameter in HANA



The following SAP HANA parameters control column loads during SAP HANA startup and, on the secondary system of a system replication scenario, based on the columns loaded into memory before the shutdown.

Parameter Name       -  preload_column_tables
Default Value             -  true

About Parameter      - 

By default, SAP HANA loads the columns into the memory of the secondary system of a system replication scenario during normal uptime. This has the advantage that a reload is not required at failover time. If you want to disable this feature (e.g. because only limited memory is available on the secondary side), you can set the preload_column_tables parameter to 'false'.

The effect of this parameter depends on the system where it is set.

Primary system: Information about loaded tables is collected and persisted in the system replication related snapshot.
Secondary system: The load information from primary is evaluated and the tables are loaded accordingly.

How to Set Parameter -  global.ini -> [system_replication] -> preload_column_tables


Additional Command - We can check which tables are currently part of this reload information using the following hdbcons command.

tablepreload c -f

Reload Preloaded Tables Parameter in HANA



The following SAP HANA parameters control column loads during SAP HANA startup and, on the secondary system of a system replication scenario, based on the columns loaded into memory before the shutdown.

Parameter Name       - reload_tables
Default Value           - true

About Parameter      - 

If set to 'true', SAP HANA loads columns into memory during startup that were located in memory before shutdown. This can be considered pre-warming, making sure that column loads are not required when a table is explicitly accessed for the first time.

How to Set Parameter -  indexserver.ini -> [sql] -> reload_tables


Parameter Name       -  tables_preloaded_in_parallel
Default Value            -  5

About Parameter      - 

Number of tables loaded in parallel after startup.

A higher value typically results in quicker reloads, but a higher CPU consumption, so it is a trade-off between load time and resource consumption. If you want to adjust it, you should perform some tests to find an optimal value to fulfil your needs.

How to Set Parameter -  indexserver.ini -> [parallel] -> tables_preloaded_in_parallel
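The trade-off described above can be illustrated with a sketch (not SAP code; the table names and the load function are stand-ins): a fixed-size worker pool loads tables concurrently, mirroring the default of 5 parallel loads.

```python
from concurrent.futures import ThreadPoolExecutor

TABLES_PRELOADED_IN_PARALLEL = 5  # mirrors the default parameter value

def load_table(name: str) -> str:
    # Stand-in for the real column load performed by the indexserver.
    return name

def preload(tables, workers=TABLES_PRELOADED_IN_PARALLEL):
    """Load tables with a bounded degree of parallelism."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(load_table, tables))

loaded = preload([f"TAB{i}" for i in range(12)])
```

Raising `workers` finishes the reload sooner but keeps more CPUs busy at once, which is exactly the load-time versus resource-consumption trade-off named above.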

Tablepreload Write Interval Parameter in HANA



How can reload information for startup and system replication be collected? The following parameter is used to control the collection of reload information:

Parameter Name       -  tablepreload_write_interval

Default Value           -  3600 (HANA Revision 69 and below)
Default Value           -   86400 (HANA Revision 70 and above)

About Parameter      - 

This parameter defines the frequency of collecting table load information for reloads during startup and on the secondary system replication side. Collection of the data is disabled by setting the value to 0.

How to Set Parameter -   global.ini -> [persistence] -> tablepreload_write_interval

Additional Command   - In exceptional cases you can manually execute the following hdbcons command in order to collect the current load state.

tablepreload writeinfo

Reference From    -  see SAP Note 1889081 for more information.

Unused Retention Period Parameter in HANA



Configuring a retention for unloads typically provides no advantage and increases the risk of unnecessary unloads and loads. Therefore these parameters should only be set in exceptional situations.

Automatic unloads when a column exceeds the configured unused retention period:

Parameter Name      -  unused_retention_period

Default Value           -  0 (disabled)
Unit for Parameter  - s
About Parameter     -  Number of seconds after which an unused object can be unloaded

How to Set Parameter -  global.ini -> [memoryobjects] -> unused_retention_period


Parameter Name      - unused_retention_period_check_interval

Default Value          -  7200
Unit for Parameter  -  s
About Parameter     -  Check frequency for objects exceeding the retention time

How to Set Parameter -  global.ini -> [memoryobjects] -> unused_retention_period_check_interval
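The interaction of the two parameters above can be sketched as a simple decision function (illustrative only; the name and signature are not SAP code). A retention period of 0 disables retention-based unloads, matching the default:

```python
def should_unload(idle_seconds: float, retention_period: int) -> bool:
    """Unload an unused object only when a retention period is configured
    (non-zero) and the object has been idle longer than that period."""
    return retention_period > 0 and idle_seconds > retention_period
```

The second parameter, unused_retention_period_check_interval, only controls how often this check runs (every 7200 s by default), not the decision itself.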

Unload Lower Bound Parameter in HANA



SAP HANA automatically performs unloads when memory becomes scarce. This is typically critical for performance and should be avoided whenever possible. The following SAP HANA parameter globally controls when unloads are performed in case of a high memory utilization:

Parameter Name - unload_lower_bound

Default Value - 0 (default algorithm)

Unit for Parameter - byte

About Parameter -

This parameter can be used to define the minimum size of swappable SAP HANA objects that should remain in memory. Higher values result in fewer and later unloads. Normally the default value provides reasonable unload behaviour, so it should only be adjusted in specific cases.

How to Set Parameter - indexserver.ini -> [memoryobjects] -> unload_lower_bound

Reference from : SAP Note 1993128 for more information.

Convert Row Store to Column Store Table



We are going to discuss row store to column store conversion and column store to row store conversion at table level in the HANA database.

Row Store to Column Store Conversion :

Converting a row table to a column table is possible with the SQL command below.

ALTER TABLE {Schema_Name}.{Table_Name} COLUMN

Column Store to Row Store Conversion :

Replace COLUMN with ROW to convert a column table to a row table.

ALTER TABLE {Schema_Name}.{Table_Name} ROW

Example:

Alter table "HANADBXY"."LOCK_ABC_LOCK" row 

Topic on - Row Store Table in SAP HANA
Topic on - Column Store Table in SAP HANA

Monday, January 25, 2016

Delta Merge in HANA



The purpose of the delta merge operation is to move data from the write-optimized delta storage to the read-optimized, compressed main storage; in simple terms, it moves the changes collected in the delta storage to the read-optimized main storage.

The delta merge operation can be performed in three ways:

1> Automatically by HANA, using smart merge technology.
2> Manually, using the MERGE DELTA SQL statement.
3> From HANA Studio.

                                                                               - Merge Process Diagram-

Prerequisites : You should have one of the following privileges:

1> System privilege TABLE ADMIN.
2> SQL object privilege UPDATE for the table or the schema in which the table is located.

MERGE DELTA  : The MERGE DELTA statement merges the column store table's delta storage into the table's main storage.

SQL statement for MERGE DELTA :

1> MERGE DELTA OF  '(table_name)'  (hard merge)
2> MERGE DELTA OF '(table_name)' WITH PARAMETERS ('FORCED_MERGE' = 'ON') (forced merge)
3> MERGE DELTA OF '(table_name)' WITH PARAMETERS ('MEMORY_MERGE' = 'ON') (memory-only merge)

Manual Delta Merge Operation : You can trigger the delta merge operation for a column table manually using the MERGE DELTA statement or from HANA Studio. It may be necessary to trigger a merge operation manually, for example:

1> An alert has been issued because a table is exceeding the threshold for the maximum size of delta storage.
2> You need to free up memory. Executing a delta merge operation on tables with large delta storages is one strategy for freeing up memory. 
3> The delta storage does not compress data well and it may hold old versions of records that are no longer required for consistent reads. 
4> In order to optimize query execution performance of the system and to ensure optimum compression, the system needs to transfer the data from the delta part into the main part. 

SMART MERGE Option : When SMART_MERGE is ON, the HANA database does a smart merge based on the merge criteria specified in the mergedog section of the indexserver configuration.

Requesting a smart merge for a column store table (here, table A) using the SQL below.

MERGE DELTA OF A WITH PARAMETERS('SMART_MERGE' = 'ON');

Note : Even though the delta merge operation moves data from the delta storage to the main storage, the size of the delta storage will not be zero, because records written by transactions that were open while the delta merge operation was taking place are moved to the new delta storage.
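The behaviour described in the note can be modelled with a sketch (a toy model, not SAP code): committed delta rows move into a new main storage, while rows of still-open transactions move to a new delta storage, so the delta is not necessarily empty after the merge.

```python
def delta_merge(main, delta):
    """Toy delta merge: committed delta rows join the new main storage;
    uncommitted rows survive in the new delta storage."""
    committed = [r for r in delta if r["committed"]]
    uncommitted = [r for r in delta if not r["committed"]]
    new_main = main + [r["value"] for r in committed]
    return new_main, uncommitted

main, delta = delta_merge(
    [1, 2],
    [{"value": 3, "committed": True}, {"value": 4, "committed": False}],
)
```

After the merge, the new main storage holds the committed change while the uncommitted row remains in the new delta storage, matching the note above.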

Delta Merge Monitoring :

Delta Merge Operation Monitoring can be done by HANA Studio :

The delta merge operation can take a long time, so we can monitor the progress of currently running delta merge operations in the HANA Studio Administration editor:

Performance ---- Job Progress.

Merge History - The merge history can be checked by opening the Merge Statistics table on the System Information tab; the SUCCESS column indicates whether or not the merge operation was executed.

Delta Merge Operation Monitoring by using SQL Statement:

We can check the values of MEMORY_SIZE_IN_TOTAL, MEMORY_SIZE_IN_MAIN and MEMORY_SIZE_IN_DELTA in the M_CS_COLUMNS view before and after the delta merge operation to see the difference.

SQL statement to retrieve the top 100 largest delta storages in memory: 

SELECT TOP 100 * from M_CS_TABLES ORDER BY MEMORY_SIZE_IN_DELTA DESC


FAQ :  What is the delta log in delta merge?

The column store creates its own logical redo log entries for all operations executed on the delta storage. This log is known as the delta log. The delta merge operation truncates the delta log (i.e. the redo operations).

We can adjust the logging settings for single column tables using the SQL below.

ALTER TABLE TABLE_NAME {ENABLE | DISABLE}  DELTA LOG;

Note : After enabling logging, we have to perform a savepoint to be certain that all data is persisted. We should also perform a data backup, because otherwise it will not be possible to recover this data.

If logging is disabled, log entries are not persisted for that table; changes to the table are only written to the data store when a savepoint is carried out. This can cause loss of committed transactions should the indexserver terminate. In the case of a termination, you have to truncate this table and insert all data again.

For a column store table, the logging setting can be seen in the table :
public.m_cs_tables.IS_LOG_DELTA

Related Topic - Delta Merge Issue in SAP HANA
Related Topic - Auto Merge threshold formula in HANA
Related Topic - Mergedog in SAP HANA

Saturday, January 23, 2016

Loads and Unloads in SAP HANA



The SAP HANA database keeps all relevant data in memory. Standard row tables are loaded into memory when the HANA database is started and remain there as long as the HANA database is running.

Column tables are loaded into memory on demand, column by column, when they are first accessed. This is called lazy loading and means that columns that are never used are not loaded, so memory waste is avoided. This is the default behaviour of column tables in SAP HANA.

In the metadata of the table, it is possible to specify that individual columns or the entire table are loaded into memory when the database is started. The database may also unload tables or individual columns from memory.

We can also configure columns to allow access to the main storage one page at a time instead of requiring the whole column to be in memory. This saves memory when querying for a single value in the main storage while certain individual columns or the entire table reside on disk. To enable this feature, specify the column description clause PAGE LOADABLE or COLUMN LOADABLE in the (column_desc) of a CREATE TABLE or ALTER TABLE statement.
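The lazy-loading behaviour described above can be pictured with a toy model (illustrative Python, not SAP code; the class and its names are invented for the sketch): a column is read from "disk" only on its first access, so never-used columns never occupy memory.

```python
class LazyColumnTable:
    """Toy model of lazy column loading."""

    def __init__(self, on_disk):
        self._on_disk = on_disk   # column name -> values, on "disk"
        self.loaded = {}          # columns currently in memory

    def column(self, name):
        if name not in self.loaded:          # first access triggers the load
            self.loaded[name] = self._on_disk[name]
        return self.loaded[name]

t = LazyColumnTable({"id": [1, 2], "name": ["a", "b"]})
```

Right after construction nothing is in memory; accessing `t.column("id")` loads only that column, leaving `"name"` on disk until it is needed.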

Prerequisites : the following privileges are required for load and unload operations.

1> System privilege TABLE ADMIN
2> SQL object privilege UPDATE for the table or the schema in which the table is located

Example : if a query or another process in the database requires more memory than is currently available, unloads start based on a least recently used algorithm.

SQL for load table ABC into memory.

 LOAD ABC all;

SQL for Unload table ABC from memory.

 UNLOAD ABC;

SQL for checking the load status of table ABC.

select loaded from m_cs_tables where table_name = 'ABC'

Note :  If you load a table, the complete data of the table, including the data in its delta storage, is loaded into main memory. Depending on the size of the table, this may take time; the table's load status is then FULL.

If you unload a table, the complete data of the table, including the data in its delta storage, is unloaded from main memory. Subsequent access to this table will be slower, as the data has to be reloaded into memory; the table's load status is then NO.


A table can have one of the following load status:

Unloaded : None of the data in the table is loaded to main memory.

Partly loaded : Some of the data in the table is loaded to main memory, for example a few columns recently used in a query. Data that is in the delta storage can only be fully loaded or unloaded.

Fully loaded : All data of the column store is loaded into main memory.

SQL for checking Unloads Detail:

The following query provides the number of unloads during a particular one-hour time slot:

select count(*) from M_CS_UNLOADS where UNLOAD_TIME between '19.08.2015 09:00:00' and '19.08.2015 10:00:00' 

FAQ: In which sequence are columns unloaded?

Usually unloads happen based on a "least recently used" (LRU) approach, so the columns that have not been used for the longest time are unloaded first. We can prioritize unloads using the UNLOAD PRIORITY setting:

ALTER TABLE "" UNLOAD PRIORITY

The priority can vary between 0 and 9. Tables with a higher priority are unloaded earlier than tables with a lower priority. SAP HANA considers both the last access time and the unload priority for the proper sequence of unloads, so both factors are important. The unload priority of a table can be checked using below SQL:

SELECT UNLOAD_PRIORITY FROM TABLES WHERE TABLE_NAME = ''
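The ordering described above (higher priority unloads earlier; among equal priorities the least recently used goes first) can be sketched as follows. This is an illustrative model, not SAP's actual eviction code:

```python
def unload_order(tables):
    """Order unload candidates: higher UNLOAD PRIORITY first, then the
    least recently accessed table among equal priorities."""
    return sorted(tables, key=lambda t: (-t["priority"], t["last_access"]))

candidates = [
    {"name": "A", "priority": 5, "last_access": 10},
    {"name": "B", "priority": 9, "last_access": 50},
    {"name": "C", "priority": 5, "last_access": 2},
]
order = [t["name"] for t in unload_order(candidates)]
```

Table B (priority 9) is unloaded first despite being recently used, and C beats A because it has been idle longer, showing how both factors combine.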


FAQ : How can unloads be monitored?

We can monitor unloads in the following ways:

1> Monitoring view M_CS_UNLOADS
2> SAP HANA Studio -> Administration -> Performance -> Load -> Column Unloads

Friday, January 22, 2016

Row Store Table in HANA



Conventional databases, i.e. row stores, are good for write operations; a database with row store architecture is therefore also called a write-optimized system. This type of architecture is effective especially in OLTP systems.

Row storage is the conventional technique used in earlier databases to store data in tabular form. In row storage, data is inserted in the form of tuples, where each tuple is a row uniquely identifying a record. There are many advantages of row store, but the most important one is that it is easy to add or modify a record. So for systems that need to write back to the database, row store is easier than column store.

A column-oriented database (also called a C-store) stores the content column-wise rather than by row. Though this technique is the opposite of the conventional database, it is very useful for data compression: storing data by column results in better compression than the conventional row store. It also performs well during query processing, since queries read only the columns that are required.
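The contrast between the two layouts can be made concrete with a sketch (illustrative data structures, not how HANA stores bytes internally):

```python
records = [
    {"id": 1, "city": "Berlin"},
    {"id": 2, "city": "Pune"},
    {"id": 3, "city": "Berlin"},
]

# Row store: each tuple is kept together - good for touching whole records.
row_store = [(r["id"], r["city"]) for r in records]

# Column store: each column's values are stored contiguously - good for
# scans and aggregations over single columns, and for compression.
column_store = {
    "id":   [r["id"] for r in records],
    "city": [r["city"] for r in records],
}
```

A query that only needs `city` scans one contiguous list in the column layout, while the row layout would force it to read every tuple.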

SQL For Monitoring Row Store Table:

Total Memory Consumption of All Row Tables:

SELECT round(sum(USED_FIXED_PART_SIZE + USED_VARIABLE_PART_SIZE)/1024/1024) AS "Row Tables MB"  FROM M_RS_TABLES;

Memory Consumption of Row Store Tables in a Schema :

SELECT SCHEMA_NAME, TABLE_NAME,
round((USED_FIXED_PART_SIZE + USED_VARIABLE_PART_SIZE)/1024/1024, 2) AS "MB Used" FROM M_RS_TABLES WHERE schema_name = 'SYSTEM' ORDER BY "MB Used" DESC, TABLE_NAME

Advantages:

* easier to insert and update
* the data is stored together
* Row Store tables have a primary index
* Row ID maps to primary key
* Secondary indexes can be created
* Recommended when the tables contain less volume of data.
* Used when the application request has to access the entire row.
* The application needs to only process a single record at one time (many selects and/or updates of single records).  
* The application typically needs to access a complete record (or row).  
* The columns contain mainly distinct values so that the compression rate would be low.  
* Neither aggregations nor fast searching are required.  
* The table has a small number of rows (e. g. configuration tables). 

The disadvantages are :

*  all data in a row has to be read, even though the requirement may be to access data from only a few columns (as in analytical processing)

Topic on - Column Store Table in SAP HANA
Topic on - Convert Row Store to Column Store Table in HANA

Column Store Table in SAP HANA



HANA is a hybrid database which uses both row and column store techniques. A column store table stores records as a sequence of columns. A column store table is comprised of two index types: for each column, a main index and a delta index. The delta storage is optimized for write operations, and the main storage is optimized for read performance and memory consumption.

Conventional databases, i.e. row stores, are good for write operations; a database with row store architecture is therefore also called a write-optimized system. This type of architecture is effective especially in OLTP systems.

On the other hand, systems used for ad hoc querying over huge volumes of data are read optimized, i.e. analytical or data warehouse systems where data is mostly used for reporting purposes. These systems represent read-optimized workloads in which bulk data loads are performed alongside ad hoc queries. Other examples of read-optimized systems are CRM (Customer Relationship Management), electronic library catalogues and other ad hoc inquiry systems. In such environments, a column store architecture, in which the values of each single column are stored contiguously, should be more efficient.




Example: create history column table A.

CREATE HISTORY COLUMN TABLE A(c NVARCHAR(1000)) PARTITION BY ROUNDROBIN PARTITIONS 2;

SQL For Monitoring Column Store Table:

To get a high-level overview of the amount of memory used for Column Store tables, you can execute the following SQL statement:

SELECT ROUND(SUM(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "CS Memory (MB)",
ROUND(SUM(MEMORY_SIZE_IN_MAIN)/1024/1024) AS "CS Memory In Main (MB)",
ROUND(SUM(MEMORY_SIZE_IN_DELTA)/1024/1024) AS "CS Memory In Delta(MB)"
FROM M_CS_TABLES WHERE LOADED <> 'NO'

To get a breakdown by host, service, and schema, you can execute the following statement:

SELECT S.HOST AS "Host",
SERVICE_NAME AS "Service",
SCHEMA_NAME AS "Schema",
ROUND(SUM(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Schema CS Memory (MB)"
FROM M_CS_TABLES AS T JOIN M_SERVICES AS S ON T.HOST = S.HOST AND T.PORT =S.PORT WHERE LOADED <> 'NO'
GROUP BY S.HOST, SERVICE_NAME, SCHEMA_NAME
ORDER BY "Schema CS Memory (MB)" DESC

To get a breakdown of the memory consumed by a specific table, which also shows which of its columns are loaded and the compression ratio that was accomplished, list all tables for a schema (here SYSTEM):

SELECT TABLE_NAME as "Table",
round(MEMORY_SIZE_IN_TOTAL/1024/1024) as "MB Used"
from M_CS_TABLES where SCHEMA_NAME = 'SYSTEM' order by "MB Used" desc

To get a breakdown by columns of a single table, showing the actual size of the data, the delta changes, and the compression ratio for each of its columns:

select COLUMN_NAME as "Column", LOADED,
round(UNCOMPRESSED_SIZE/1024/1024) as "Uncompressed MB",
round(MEMORY_SIZE_IN_MAIN/1024/1024) as "Main MB",
round(MEMORY_SIZE_IN_DELTA/1024/1024) as "Delta MB",
round(MEMORY_SIZE_IN_TOTAL/1024/1024) as "Total Used MB",
round(COMPRESSION_RATIO_IN_PERCENTAGE/100, 2) as "Compr. Ratio"
from M_CS_COLUMNS where TABLE_NAME = 'Item'

Total Memory Consumption of All Column Tables by Schema:

SELECT SCHEMA_NAME AS "Schema",
round(sum(MEMORY_SIZE_IN_TOTAL) /1024/1024) AS "MB"
FROM M_CS_TABLES GROUP BY SCHEMA_NAME ORDER BY "MB" DESC;

Points on Performance: (why is accessing the data fast in a column store?)

*  Column store databases store a dictionary of every distinct value that occurs in a column.
*  For each distinct value, a list is maintained that tells for which rows the value occurs.
*  When we want to reconstruct a row/record from these values, we go through each column and find the appropriate match. This is nothing but index inversion.
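The dictionary idea in the first two points can be sketched in a few lines (a simplified illustration of dictionary encoding, not HANA's internal format; the function name is invented):

```python
def dictionary_encode(values):
    """Build a sorted dictionary of distinct values plus the value-ID
    vector that replaces the column contents (simplified)."""
    dictionary = sorted(set(values))
    ids = {v: i for i, v in enumerate(dictionary)}
    return dictionary, [ids[v] for v in values]

dictionary, encoded = dictionary_encode(["Berlin", "Pune", "Berlin"])
```

Repeated values like "Berlin" are stored once in the dictionary and referenced by small integer IDs, which is why columns with few distinct values compress so well.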

Advantages of column store:

- Only affected columns have to be read during the selection process of a query.
- Any of these columns can serve as an index
- Improves read functionality significantly, also improves write functionality
- Highly compressed data
- No real files, virtual files
- Optimizer and Executer – Handles queries and execution plan
- Delta data for fast write
- Asynchronous delta merge
- Consistent view Manager
- Recommended when the tables contain huge volumes of data.
- Used when lot of aggregations need to be done on the tables.
- Supports parallel processing.

The disadvantages are :

 - After selection, the selected rows have to be reconstructed from columns.
 - There is no easy way to insert / update.

Note: To enable fast on-the-fly aggregations and ad-hoc reporting, and to benefit from compression mechanisms, it is recommended that transaction data be stored in column-based tables. The SAP HANA database allows joining row-based tables with column-based tables. However, it is more efficient to join tables that are located in the same store. For example, master data that is frequently joined with transaction data should also be stored in column-based tables.

Topic on - Row Store Table in SAP HANA
Topic on - Convert Row Store to Column Store Table in HANA

License Key Types in SAP HANA



An SAP HANA license and keys are required to use the SAP HANA database. You can install or delete HANA license keys using SAP HANA Studio. Like all SAP products, SAP HANA requires a valid license for production systems.

Types of License keys in SAP HANA :

SAP HANA system supports two types of License keys :

1> Temporary
2> Permanent -  Enforced and Unenforced

Temporary License Key − A temporary license key is installed by default when you install the HANA database. It is valid only for 90 days, and upon expiration the HANA system is locked down, so you should request permanent license keys from the SAP Service Marketplace before expiry.

Permanent License Key − SAP provides this license upon customer request. Permanent license keys are valid only until the predefined expiration date. Once the expiration date has passed, the license expires and a temporary license is installed automatically for a 28-day grace period; during this period, you have to install a permanent license key again.

Permanent license keys are of two types for HANA system :

Unenforced − If an unenforced license key is installed, the system will issue a warning message upon over-usage of the SAP HANA licensed memory amount, but it will not get locked down; operation of SAP HANA is not affected in this case.

Enforced − This HANA license is based on memory usage. If an enforced license key is installed and the consumption of the HANA system exceeds the licensed amount of memory, the HANA system gets locked. If this situation occurs, the HANA system has to be restarted, or a new license key should be requested and installed.

Example : you can install a HANA system on 1 TB of memory but license it for only 256 GB of usage. If the system usage increases over the licensed amount, the system will be locked down (for an enforced license).
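The enforced/unenforced behaviour described above can be summarized in a small decision sketch (illustrative only; function and return values are invented for the example):

```python
def license_action(used_gb, licensed_gb, enforced):
    """Model the over-usage behaviour: an unenforced license only warns,
    while an enforced license locks the system down."""
    if used_gb <= licensed_gb:
        return "ok"
    return "lockdown" if enforced else "warning"
```

With the 256 GB example above, 300 GB of usage yields a lockdown under an enforced license but only a warning under an unenforced one.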

License Expire in SAP HANA

SAP HANA License Management

How to check if the installed HANA license is enforced or unenforced ?

Execute the following query; if the "ENFORCED" column value is "FALSE", then your license is not enforced.

SELECT EXPIRATION_DATE,ENFORCED,PERMANENT,LOCKED_DOWN FROM "SYS"."M_LICENSE";

SAP HANA License Management



This topic describes how to request an SAP HANA database license from the SAP Service Marketplace and apply it to the database. It also describes how to check the SAP HANA license properties of a database using SAP HANA Studio and using DB commands.

To check the SAP HANA license properties using SAP HANA Studio :

Steps: 
1> Connect to the required system via HANA Studio
2>  In SAP HANA Studio Navigator, connect to an instance of the SAP HANA Database Engine using relevant user (SYSTEM).
3> Right-click the desired instance and select Properties.
4>  Select License, the information about license appears.
5>  A dialog box appears reflecting the status of the action.

Right Click on HANA system → Properties → License



Note : The above information can also be retrieved by executing the following query.

SELECT * FROM "SYS"."M_LICENSE";

FAQ : How to uninstall license keys in SAP HANA ?

License keys can be uninstalled using the following SQL statement:

UNSET SYSTEM LICENSE ALL

This command will delete all existing license keys installed so far in the database instance. Furthermore, it sets the system immediately to lockdown state. A new valid permanent license key needs to be installed in order to unlock the instance. Like installing a license key, your user needs the LICENSE ADMIN system privilege to delete the license keys.

Link for : License Expire in SAP HANA

Link for : License Key Types in SAP HANA

Thursday, January 21, 2016

Performance Issue in HANA Database



Case Study:

HANA DB Version : 97

Symptoms:

We were facing performance issues in the SID HANA database; due to the performance issue, memory utilization in the HANA database was reaching the threshold point.

Memory at H/W level : 1 TB
Hana Database  Size : 220 GB

DBA Findings :

Findings from the indexserver trace file:

Total allocated memory = 1046594002944b (974.71gb)
Total used memory      = 1000702303849b (931.97gb)
Sum AB                 = 1029626544128
Sum Used               = 983734845033
Heap memory fragmentation: 4

Top allocators (component, name, size). Ordered descending by inclusive_size_in_use.

 1: System:       /             977995705511b (910.82gb)
 2: System:     Pool         910542437746b (848gb)
 3: Statement Execution & Intermediate Results: Pool/JoinEvaluator    767792139088b (715.06gb)
 4: Statement Execution & Intermediate Results: Pool/JoinEvaluator/JECreateNTuple        472033054784b (439.61gb)
 5: Statement Execution & Intermediate Results: Pool/JoinEvaluator/NTuple   295569563952b (275.27gb)

The trace above clearly shows that JoinEvaluator is the top allocator, which means the performance issue in the HANA database is caused by joins in queries.
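The byte values in such a trace can be converted and ranked programmatically. A hedged sketch, with the line format inferred from the excerpt above (it is not a documented interface, so verify against your own trace files):

```python
import re

# Matches one "Top allocators" line from the indexserver trace, e.g.
#   3: Statement Execution & Intermediate Results: Pool/JoinEvaluator    767792139088b (715.06gb)
LINE = re.compile(r"^\s*(\d+):\s*(.*):\s+(\S+)\s+(\d+)b\b")

def parse_allocator(line):
    """Return (rank, component, allocator, size_bytes, size_gb) or None."""
    m = LINE.match(line)
    if m is None:
        return None
    rank, component, name, size = m.groups()
    size = int(size)
    return int(rank), component.strip(), name, size, round(size / 1024**3, 2)

line = " 3: Statement Execution & Intermediate Results: Pool/JoinEvaluator    767792139088b (715.06gb)"
print(parse_allocator(line))
```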

Solution :


1. This is a HANA bug that has been observed in SAP HANA revision 97.

Expected Fix:

The issue has been fixed in HANA revision 102.4.

HANA Database Not Starting Due to Index Rebuilding




Case Study:

HANA DB Version : 96

Symptoms:

We are facing issues with the ABC database: the SAP HANA database is not starting and the system is currently down.

The DBA's findings:

Findings from the indexserver trace file:

If the indexserver hits an OOM during a database restart and the trace shows IndexRebuildAllocator among the top allocators, or the database fails to start because of IndexRebuildAllocator, then proceed as follows.

Top allocators (ordered descending by exclusive_size_in_use).

1: Pool/IndexRebuildAllocator          79991234848b (74.49gb)

Whenever we restart the database, HANA triggers the index rebuild procedure for all indexes at the same time. Rebuilding each index requires a certain amount of memory, which can lead to an OOM situation.

Solution / Work around:

 

We have added the following parameter, which stops the index rebuild in the SAP HANA database at startup.

indexserver.ini

section:[row_engine]

parameter name: use_jobex_index_rebuild

parameter value:  false
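The same change can be applied via SQL instead of editing the file directly. A sketch that builds the ALTER SYSTEM statement (the helper is ours; double-check the generated statement against your revision before running it):

```python
def alter_config_sql(ini, section, key, value, layer="SYSTEM", reconfigure=True):
    """Build an ALTER SYSTEM ALTER CONFIGURATION statement for a parameter change."""
    sql = (f"ALTER SYSTEM ALTER CONFIGURATION ('{ini}', '{layer}') "
           f"SET ('{section}', '{key}') = '{value}'")
    if reconfigure:
        sql += " WITH RECONFIGURE"  # apply online, without restart
    return sql

print(alter_config_sql("indexserver.ini", "row_engine", "use_jobex_index_rebuild", "false"))
```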

Expected Fix: 

Once the database is upgraded to revision 97, the above parameter should be removed.

Thursday, January 14, 2016

Top 3 alerts in SAP HANA



Top 3 alerts in HANA databases:

ALERT_ID   ALERT_NAME
58         Plan cache size
0          Internal statistics server problem
3          Inactive services


My favorite is alert 58. SAP simply cannot provide a good default size for the HANA plan cache, so every HANA DBA has to adapt the plan cache initially and reevaluate the size whenever alert 58 fires.
The optimal size depends on your workload: you don't want to waste memory, but you don't want to impede performance either. SAP Note 1977253 clarifies all questions; for ABAP systems I calculate the recommended plan cache size directly via SQL:

select max(connections)*15*1024*1024 from (select snapshot_id,count(distinct connection_id) connections from _sys_statistics.host_connections group by snapshot_id)
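The sizing rule behind this SQL is simply 15 MB of plan cache per concurrent connection at peak. A small sketch of the same arithmetic (the helper name is ours):

```python
def recommended_plan_cache_bytes(peak_connections, mb_per_connection=15):
    """Plan cache sizing rule used in the SQL above: 15 MB per concurrent connection."""
    return peak_connections * mb_per_connection * 1024 * 1024

# 200 concurrent connections at peak:
print(recommended_plan_cache_bytes(200))  # 3145728000 bytes (~2.93 GB)
```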


For alert 0 there is no check threshold; I believe this error is raised automatically once the statistics server detects a problem. The severity level 5 also makes this alert unique. Here you will have to dive into the trace files and search for a suitable SAP Note to fix it.


The situation for alert 3 is quite similar. "Inactive services" is a very sensitive alert which is probably already known to you. Again, there is no panacea, since it is just a symptom of different kinds of issues.

Saturday, January 9, 2016

Alerts in SAP HANA



Purpose:

The purpose of this Guide is to provide a list of alerts that may occur within SAP HANA

Overview:

This list of alerts for SAP HANA provides a quick reference on how to troubleshoot issues in your SAP HANA database. The Notes are updated regularly to reflect the most recent revisions.


SAP Note 1898317 How to handle HANA Alert 1: #Host physical memory usage#

SAP Note 1900643 How to handle HANA Alert 2: #Disk usage#

SAP Note 1902033 How to handle HANA Alert 3: #Inactive Services#

SAP Note 1909660 How to handle HANA Alert 4: #Restarted services#

SAP Note 1909670 How to handle HANA Alert 5: #Host CPU Usage#

SAP Note 1909641 How to handle HANA Alert 10: #Delta Merge (mergedog) Configuration#

SAP Note 1909707 How to handle HANA Alert 16: #Lock Wait Timeout Configuration#

SAP Note 1909763 How to handle HANA Alert 17: #Record count of non-partitioned column-store tables#

SAP Note 1909742 How to handle HANA Alert 19: #Record Count of Delta Storage of Column-Store Tables#

SAP Note 1910140 How to handle HANA Alert 20: #Table growth of non-partitioned column-store tables#

SAP Note 1910159 How to handle HANA Alert 25: #Check number of connections#

SAP Note 1910169 How to handle HANA Alert 26: #Unassigned volumes#

SAP Note 1910188 How to handle HANA Alert 27: #Record count of column-store table partitions#

SAP Note 1977291 How to handle HANA Alert 28: #Most recent savepoint operation#

SAP Note 1977314 How to handle HANA Alert 29: #Size of delta storage of column-store tables#

SAP Note 1898317 How to handle HANA Alert 31: #License expiry#

SAP Note 1900267 How to handle HANA Alert 32: #Log mode OVERWRITE#

SAP Note 1900296 How to handle HANA Alert 33: #Log Mode LEGACY#

SAP Note 1900682 How to handle HANA Alert 34: #Unavailable Volumes#

SAP Note 1900728 How to handle HANA Alert 35: #Existence of Data Backup#

SAP Note 1900795 How to handle HANA alert 36: #Status of Most Recent Data Backup#

SAP Note 1900730 How to handle HANA Alert 37: #Age of most recent data backup#

SAP Note 1900788 How to handle HANA Alert 38: #Status of Most Recent Log Backup#

SAP Note 1977262 How to handle HANA Alert 39: #Check long-running statements#

SAP Note 1977268 How to handle HANA Alert 40: #Total memory usage of column-store tables#

SAP Note 1977230 How to handle HANA Alert 41: #Check the In-memory DataStore activation#

SAP Note 1900261 How to handle HANA Alert 42: #Long-Running Cursors#

SAP Note 1900257 How to handle HANA Alert 43: #Memory Usage of Services#

SAP Note 1977269 How to handle HANA Alert 45: #Check memory usage of main storage of column-store tables#

SAP Note 2079396 How to handle HANA Alert 49: #long-running blocking transaction#

SAP Note 1977162 How to handle HANA Alert 50: #Number of Diagnosis files#

SAP Note 1900261 How to handle HANA Alert 51: #Size of diagnosis files#

SAP Note 1977218 How to handle HANA Alert 52: #Check for new crash dump files#

SAP Note 1977242 How to handle HANA Alert 53: #Pagedump files#

SAP Note 1977220 How to handle HANA Alert 54: #Savepoint Duration#

SAP Note 1977207 How to handle HANA Alert 55: #Columnstore unloads#

SAP Note 1977098 How to handle HANA Alert 56: #Python trace activity#

SAP Note 1977221 How to handle HANA Alert 57: #Secure store file system (SSFS) availability#

SAP Note 1977253 How to handle HANA Alert 58: #Plan Cache Size#

SAP Note 2082406 How to handle HANA Alert 62: #Expiration of database user passwords#

SAP Note 2081857 How to handle HANA Alert 63: #Granting of SAP_INTERNAL_HANA_SUPPORT role#

SAP Note 2081405 How to handle HANA Alert 66: #Storage snapshot is prepared#

SAP Note 2050579 How to handle HANA Alert 68: #total memory usage of row store#

SAP Note 2081360 How to handle HANA Alert 69: #Enablement of automatic log backup#


Friday, January 8, 2016

SAP HANA Password Policy Configuration



Password Policy Configuration Options:

Passwords for the user name/password authentication of database users are subject to certain rules, or password policy.
You can change the default password policy in the Security editor of the SAP HANA studio (recommended) or directly in the password_policy section of the indexserver.ini system properties file.

Note : The password policy parameters for the system database of a multiple-container system are in the nameserver.ini file and not the indexserver.ini file.

1. minimal_password_length :

Default Value : 8
Additional Information : You must enter a value between 6 and 64.

2. force_first_password_change :  

Defines whether users have to change their initial passwords immediately the first time they log on.
Default Value : true

A user administrator (that is, a user with the system privilege USER ADMIN) can force a user to change his or her password at any time with the following SQL statement: ALTER USER <user_name> FORCE PASSWORD CHANGE

A user administrator can override this password policy setting for individual users (for example, technical users) with the following SQL statement:

CREATE USER <user_name> PASSWORD <password> [NO FORCE_FIRST_PASSWORD_CHANGE]
ALTER USER <user_name> PASSWORD <password> [NO FORCE_FIRST_PASSWORD_CHANGE]


3. last_used_passwords : 

The number of last used passwords that the user is not allowed to reuse when changing his or her current password
Default Value : 5
Additional Information : If you enter the value 0, the user can reuse his or her old password.


4. maximum_invalid_connect_attempts :

The maximum number of failed logon attempts that are possible; the user is locked as soon as this number is reached.
Default Value : 6
Additional Information : You must enter a value of at least 1.

A user administrator can reset the number of invalid logon attempts with the following SQL statement:
ALTER USER <user_name> RESET CONNECT ATTEMPTS

Note: The first time a user logs on successfully after an invalid logon attempt, an entry is made in the INVALID_CONNECT_ATTEMPTS system view containing the following information:
The number of invalid logon attempts since the last successful logon
The time of the last successful logon

A user administrator can delete information about invalid logon attempts with the following SQL statement:
ALTER USER <user_name> DROP CONNECT ATTEMPTS


5. password_lock_time :


The number of minutes for which a user is locked after the maximum number of failed logon attempts.
Default Value : 1440
Additional Information : If you enter the value 0, the user is unlocked immediately. This disables the functionality of the parameter maximum_invalid_connect_attempts.
A user administrator can reset the number of invalid logon attempts and reactivate the user account with the following SQL statement:
ALTER USER <user_name> RESET CONNECT ATTEMPTS

It is also possible to reactivate the user in the user editor of the SAP HANA studio.
To lock a user indefinitely, enter the value -1. In the Security editor of the SAP HANA studio, this corresponds to selecting the Lock indefinitely checkbox. The user remains locked until reactivated by a user administrator as described above.
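The interaction of maximum_invalid_connect_attempts and password_lock_time described above can be sketched as a small decision function (our helper, not a HANA API; the special values 0 and -1 follow the rules quoted above):

```python
def is_locked(failed_attempts, minutes_since_lock,
              max_attempts=6, lock_time_minutes=1440):
    """Sketch of the lock rules above.

    lock_time_minutes == 0  -> user is unlocked immediately
    lock_time_minutes == -1 -> user is locked indefinitely
    """
    if failed_attempts < max_attempts:
        return False            # lock threshold not reached yet
    if lock_time_minutes == 0:
        return False            # immediate unlock: lock feature disabled
    if lock_time_minutes == -1:
        return True             # indefinite lock until an admin intervenes
    return minutes_since_lock < lock_time_minutes

print(is_locked(6, 60))                        # True: inside the 1440-minute window
print(is_locked(6, 2000))                      # False: lock has expired
print(is_locked(6, 0, lock_time_minutes=-1))   # True: locked indefinitely
```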

SAP Note 2018947: Crash during HANA database startup



Symptom:

The indexserver or other database services crash during the database start up with following call stack:
0: ptime::Transaction::Transaction
1: ptime::TraceUtil::getOidsFromObjects
2: ptime::ExpensiveStatementTraceFilter::load
3: ptime::ExpensiveStatementTracerImpl::loadConfig
4: ptime::Config::startup
5: TRexAPI::TREXIndexServer::startup
6: nlsui_main

Reason and Prerequisites:

The expensive statement trace is turned on and there is a filter on a specific object defined. During start up the tracer tries to use a transaction but the object has not been initialized yet.

Solution: The crash is fixed with SAP HANA revision 81.

Workaround: To be able to start up the database, the filter on the objects needs to be removed. Therefore, please remove the related entry in global.ini. The complete section about the expensive statement trace in global.ini looks similar to this:

[expensive_statement]
threshold_duration = 1
user = system
object = sys.m_connections

To solve the situation, the entry "object = …" needs to be removed.

SAP HANA Memory Consumption Parameters



The memory manager section of the global.ini file contains parameters that allow you to control the memory consumption of SAP HANA. In a system that supports multi-tenant database containers, you can configure the global.ini at both the system level and the database level. Parameters configured at the system level apply to the complete system and all databases. Parameters configured at the database level apply to the specified database only.

SAP HANA Statement Memory Limit Threshold



The statement memory limit is applied if the current SAP HANA memory consumption exceeds the statement memory limit threshold as a percentage of the global allocation limit.

Parameter Name - statement_memory_limit_threshold

Default Value - 0% (statement_memory_limit is always respected)

About Parameter -

The properties statement_memory_limit and statement_memory_limit_threshold in the memory_manager section of the global.ini configuration file are used to limit the memory that can be allocated with respect to statement execution.

statement_memory_limit_threshold indicates what percentage of the global memory allocation limit must be in use before the specific value of statement_memory_limit is applied. If this memory limit is being applied and a statement execution exceeds it, then the statement is aborted.
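The decision rule in the paragraph above can be written out explicitly. A hedged sketch of the logic, not SAP's implementation (helper and parameter names are ours; all sizes in bytes):

```python
def should_abort(statement_bytes, used_bytes, gal_bytes,
                 limit_bytes, threshold_pct=0):
    """Rule described above: statement_memory_limit only applies once overall
    usage exceeds threshold_pct of the global allocation limit (GAL)."""
    if limit_bytes == 0:
        return False              # 0 means no statement limit at all
    if used_bytes < gal_bytes * threshold_pct / 100:
        return False              # below threshold: limit is not enforced
    return statement_bytes > limit_bytes

GB = 1024**3
# 80% threshold, system at 85% of a 1000 GB GAL, statement needs 60 GB vs a 50 GB limit:
print(should_abort(60 * GB, 850 * GB, 1000 * GB, 50 * GB, threshold_pct=80))  # True
# Same statement, but system only at 70% of the GAL: the limit is not applied.
print(should_abort(60 * GB, 700 * GB, 1000 * GB, 50 * GB, threshold_pct=80))  # False
```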

How to limit query from consuming all HANA memory



By default, a single SAP HANA query can consume up to 100% of the available memory. If you want to safeguard your system against uncontrolled, expensive queries, it is a good idea to limit the memory consumption of a single statement per host.

Starting with HANA SPS 08, SAP introduced the parameter statement_memory_limit to restrict a query to a predefined memory consumption limit.

Queries are aborted with an OOM dump ("compositelimit_oom" or "[MEMORY_LIMIT_VIOLATION]") if they reach the limit specified by the parameter statement_memory_limit.

Requirement: You want to prevent a single query from consuming unlimited HANA memory.

Prerequisite: You need a database user with the INIFILE ADMIN system privilege, and the statement memory tracking feature must be enabled. Change the following parameters; they do not require a restart.

Enable Resource Tracking

a. Log in to HANA Studio and navigate to the "Configuration" tab.
b. Open "global.ini –> resource_tracking".
c. Set the parameters "enable_tracking" and "memory_tracking" to "on".


global.ini > [resource_tracking] > enable_tracking = on
global.ini > [resource_tracking] > memory_tracking = on




This will populate the MEMORY_SIZE column of the expensive statements trace (M_EXPENSIVE_STATEMENTS).

Step:

1> Connect to HANA system and open SQL console in HANA studio
2> Execute following SQL

alter system alter configuration ('global.ini','SYSTEM') set ('memorymanager', 'statement_memory_limit') = '100' with reconfigure; -- sets the limit to 100 GB



OR change following parameter.

Administration perspective > Configuration > global.ini > [memorymanager] -> statement_memory_limit = (maximum_memory_allocation_in_gb)

Example of memory cap encountered:

a. A SQL statement was executed that required more than 5 GB of working memory, which was the cap set for the test user. The error message below is returned to the user.



b. In addition, an OOM diagnosis file is generated at the time the memory limit is encountered.

Statement Memory Limit in SAP HANA

The parameter is called statement_memory_limit. It helps ensure that a single user's statement does not cause "out of memory" situations on the HANA appliance. A system-wide statement_memory_limit was introduced with HANA SPS 08.

When reaching the statement memory limit, a dump file is created with "compositelimit_oom" in the name. The statement is aborted, but otherwise the system is not affected. The unit of measure is GB.

Default : 0 (no limit)

Set system level Statement memory limit

a. Log in to HANA Studio and navigate to the "Configuration" tab.
b. Open "global.ini –> memorymanager".
c. Set "statement_memory_limit" to the desired limit in GB.


1> This limit is system-specific, depending on the global allocation limit. If the system limit has never been set and the goal is to enable only user-specific limits, set the system limit to a very high fraction of the global allocation limit. For example, if the global allocation limit is 234496 MB, set statement_memory_limit to 200000 MB; the system limit will then never be reached and is effectively ignored.



2> The system limit HAS to be set to enable USER limits.
3> Note: setting "statement_memory_limit_threshold" instead of, or in addition to, "statement_memory_limit" will DISABLE the user limit. It has not been determined whether this is a bug in revision 91 or SAP design.

Enable user level Statement memory limit :

1. Log in to HANA Studio.
2. Navigate to Security –> Users and select the user the limit is required for.
3. Double-click the user and go to the "User Parameter" tab.
4. Click the "+" button to add a new parameter.
5. Select the "Statement Memory Limit" parameter.
6. Enter the desired cap in GB and save the user. The example below is set to 5 GB.


Allocation limit in SAP HANA



Collectively, all processes cannot consume more memory than the global allocation limit. The allocation limit limits the amount of memory that can be used by individual processes. The value is the maximum allocation limit in MB.

Parameter Name - allocation_limit
Default Value - A missing entry or a value of 0 results in the system using the default settings. The default allocation limit is calculated in the same way as the default global allocation limit.

About Parameter -
The allocation limit limits the amount of memory that can be used by individual processes. The value is the maximum allocation limit in MB.

-- set the global_allocation_limit=2048

hdbsql=> select round(allocation_limit/1024/1024,2) allocation_limit from m_host_resource_utilization;
ALLOCATION_LIMIT
-----------------------------
13605.15


-- set the global_allocation_limit=20480

> select round(allocation_limit/1024/1024,2) allocation_limit from m_host_resource_utilization;
ALLOCATION_LIMIT
-----------------------------
20480
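The conversion the two queries above perform (the view reports bytes; the result is shown in MB) can be checked locally. A trivial sketch of the same arithmetic (helper name is ours):

```python
def bytes_to_mb(value, ndigits=2):
    """Same conversion as the query above: allocation_limit / 1024 / 1024."""
    return round(value / 1024 / 1024, ndigits)

# A 20480 MB allocation limit is reported by the monitoring view in bytes:
print(bytes_to_mb(20480 * 1024 * 1024))  # 20480.0
```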

SAP HANA Parameters

I am happy to say that I have put together some notes on SAP HANA parameters that will help you learn more about them. Please see the list of HANA parameters below.

FAQ :  What is the purpose of SAP HANA parameters?

SAP HANA parameters are configuration options that can be used to influence the behaviour of SAP HANA in various ways. Typical reasons for setting parameters are:

- Individual security settings
- Workaround for reported bugs
- Influencing CPU and memory resource consumption at host level
- Definition of file and directory locations in HANA

1. SAP HANA - Parameters Related to Memory Consumption

SAP HANA Parameter -  global_allocation_limit
SAP HANA Parameter -  allocation_limit
SAP HANA Parameter -  statement_memory_limit
SAP HANA Parameter -  statement_memory_limit_threshold

2. SAP HANA - Parameters Related to Unloads and loads

SAP HANA Parameter -  unload_lower_bound
SAP HANA Parameter -  unused_retention_period
SAP HANA Parameter -  unused_retention_period_check_interval
SAP HANA Parameter -  tablepreload_write_interval

3. SAP HANA - Parameters Related to Reload and Preloads

SAP HANA Parameter -  reload_tables
SAP HANA Parameter -  tables_preloaded_in_parallel
SAP HANA Parameter -  preload_column_tables

4. SAP HANA - Parameters Related to Shared Memory

SAP HANA Parameter -  keep_shared_memory_over_restart
SAP HANA Parameter -  reuse_shared_memory_for_restart
SAP HANA Parameter -  keep_shared_memory_over_restart_timeout
SAP HANA Parameter -  reuse_shared_memory_for_restart_timeout

Global Allocation Limit in SAP HANA



The global allocation limit limits the amount of memory that can be used by the system as a whole. The value is the maximum allocation limit in MB. Note: In a system that supports multitenant database containers, the global allocation limit configured at the system layer of the global.ini file is always effective, regardless of any value configured at the database layer.

Parameter Name - global_allocation_limit
Default Value - A missing entry or a value of 0 results in the system using the default settings. The global allocation limit is calculated by default as follows: 90% of the first 64 GB of available physical memory on the host plus 97% of each further GB. Or, in the case of small physical memory, physical memory minus 1 GB.

About Parameter - The global_allocation_limit parameter is used to limit the amount of memory that can be used by the database. The value is the maximum allocation limit in MB. A missing entry or a value of 0 results in the system using the default settings. The global allocation limit is calculated by default as follows: 90% of the first 64 GB of available physical memory on the host plus 97% of each further GB. Or, in the case of small physical memory, physical memory minus 1 GB.
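The default formula quoted above is easy to check for a given host size. A sketch of the main case (the "small physical memory" fallback of physical minus 1 GB is deliberately not modelled, since the exact cutoff is not stated above):

```python
def default_global_allocation_limit_gb(physical_gb):
    """Default rule quoted above: 90% of the first 64 GB of physical memory
    plus 97% of each further GB. The 'small memory' case (physical minus
    1 GB) is not modelled here."""
    return round(0.9 * min(physical_gb, 64) + 0.97 * max(physical_gb - 64, 0), 2)

print(default_global_allocation_limit_gb(128))   # 119.68
print(default_global_allocation_limit_gb(1024))  # 988.8
```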

Change the Global Memory Allocation Limit -

The SAP HANA database preallocates a pool of memory from the operating system over time, up to a predefined global allocation limit. You can change the default global allocation limit in the global.ini configuration file.

Prerequisites - You have the system privilege INIFILE ADMIN.

Procedure :

1> In the Administration editor, choose the Configuration tab. The configuration files that contain the configuration information for the system are displayed.
2> Expand the global.ini configuration file and then the memorymanager section.
3> In the context menu of the global_allocation_limit parameter, choose Change. The Change Configuration Value dialog box appears.
4> Enter a value for the entire system and/or for individual hosts.
7> If you enter only a value for the system, it is used for all hosts. For example, if you have 5 hosts and you set the limit to 5 GB, the database can use up to 5 GB on each host (25 GB in total). If you enter a value for a specific host, then for that host, the specific value is used and the system value is only used for all other hosts. This is relevant only for multiple-host (distributed) systems.
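The system-versus-host resolution described above can be sketched as a simple lookup: a host-specific value wins, otherwise the system value applies to that host (helper and host names are illustrative):

```python
def effective_limits(system_value_gb, hosts, host_overrides=None):
    """Resolve the per-host limit: a host-specific value wins,
    otherwise the system value applies to that host."""
    host_overrides = host_overrides or {}
    return {h: host_overrides.get(h, system_value_gb) for h in hosts}

hosts = ["host1", "host2", "host3", "host4", "host5"]
limits = effective_limits(5, hosts, host_overrides={"host3": 8})
print(limits)                # host3 uses 8 GB, all others the system value 5 GB
print(sum(limits.values()))  # 28 GB usable across the whole system
```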