
Sunday, January 31, 2016

How to Check HANA Database Size



Here, I explain how to check the HANA database size using the hdbsql and hdbcons utilities.

First, log in to the HANA database using hdbsql, then run the queries below.

1- The number of used blocks in the data volumes

hdbsql SID=> select sum(allocated_page_size) from m_converter_statistics

SUM(ALLOCATED_PAGE_SIZE)
3497635840
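The value returned by this query is in bytes; a quick sketch converting the figure above to GiB for easier reading:

```python
# Convert the byte count returned from M_CONVERTER_STATISTICS into GiB.
allocated_bytes = 3497635840  # SUM(ALLOCATED_PAGE_SIZE) from the query above

gib = allocated_bytes / 1024 ** 3
print(f"{gib:.2f} GiB")  # → 3.26 GiB
```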

2- The license-relevant HANA memory usage

select product_usage from m_license

Note: The first metric is useful for comparing HANA with other RDBMSs, e.g. what the compression ratio is compared to ordinary row-store databases. The second metric is important, e.g., for sizing the hardware.

Checking Database Size Using the hdbcons Utility -

xxx200132:HDB:sidadm xyz/usr/sap/SID/HDB02/vadbabc> hdbcons 'dvol info'

SAP HANA DB Management Client Console (type '\?' to get help for client commands)
Try to open connection to server process 'hdbindexserver' on system 'ABC', instance '02'
SAP HANA DB Management Server Console (type 'help' to get help for server commands)
Executable: hdbindexserver (PID: 9104)
[OK]
--
DataVolume #0 (xyz/hdb/SID/data/mnt00004/hdb00002/)
  size= 16760078336
  used= 3464384512

NOTE: We recommend shrinking this DataVolume. Use 'dvol shrink -i 0 -o -v'.
      Please use a percentage higher than 110% to avoid long runtime and performance decrease!

[OK]
--
[EXIT]
--
[BYE]
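The shrink recommendation in the output above follows from the low fill ratio of the data volume; a minimal sketch, using the size/used figures printed by 'dvol info':

```python
# Fill ratio of the data volume, from the 'dvol info' output above.
size = 16760078336   # total DataVolume size in bytes
used = 3464384512    # used bytes

fill_pct = used / size * 100
print(f"fill ratio: {fill_pct:.1f}%")  # → fill ratio: 20.7%
```

With roughly 79% of the volume unused, hdbcons suggests shrinking it.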

How to query from HANA Database



Here, I explain how to query the HANA database using hdbsql.

Execute the statements via hdbsql -

SID:xyz/usr/sap/SID/HDB02>   hdbsql -n L14 -i 02 -u HANADBXY -p pwd -a

Welcome to the SAP HANA Database interactive terminal.

hdbsql=> \o test.out      – specifying the output file; make sure the output file is touched
hdbsql=>  \i test.sql      – specifying the input file, the SQL file created before

1 row selected (overall time 15.39 msec; server time 740 usec)

1 row selected (overall time 14.15 msec; server time 4.203 msec)


hdbsql SID=> \o       – closing the output file
hdbsql SID=> \q       – quitting from hdbsql

SID:xyz/usr/sap/SID/HDB02>  more test.out

"/ABC/ABC14",11909
"/ABC/ABC15",6999691
"/ABC/ABC12A",29544151

SID:xyz/usr/sap/SID/HDB02>
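The output written by \o is plain CSV (quoted first column, numeric second), so it can be post-processed with standard tools; a minimal sketch parsing the content shown above with Python's csv module:

```python
import csv
import io

# Content of test.out as shown above
raw = '''"/ABC/ABC14",11909
"/ABC/ABC15",6999691
"/ABC/ABC12A",29544151
'''

# Each row is (name, count); convert the count to an integer.
rows = [(name, int(count)) for name, count in csv.reader(io.StringIO(raw))]
for name, count in rows:
    print(f"{name}: {count:,}")
```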

Wednesday, January 27, 2016

Keep Shared Memory Over Restart in HANA



Here, I explain the parameter keep_shared_memory_over_restart, which relates to shared memory, and how it works during HANA database startup.

Parameter Name       - keep_shared_memory_over_restart
Default Value           - true
About Parameter      - Set to true or false; if true, the row store data stays in shared memory.

How to Set Parameter - indexserver.ini -> [row_engine] -> keep_shared_memory_over_restart

If set to true, SAP HANA will keep the row store in memory when the technical preconditions are fulfilled. If set to false, the row store is generally recreated from scratch during startup.

Note: If you run ipcs -m on the OS level to view the shared memory, you will see segments in DEST status that do not go away, even after the HANA services are stopped (soft shutdown). The reason is that the row store data is still in shared memory, so when the HANA services are restarted they won't have to reload the data.
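The DEST status can be spotted in the last column of the ipcs -m listing; a small sketch parsing sample output (the sample lines below are illustrative, not from a real system):

```python
# Parse sample `ipcs -m` output and collect the shmids of segments whose
# status column shows "dest" (marked for destruction but still attached).
sample = """------ Shared Memory Segments --------
key        shmid      owner      bytes      nattch     status
0x00000000 393223     sidadm     268435456  2          dest
0x00000000 425992     sidadm     536870912  0
"""

dest_segments = []
for line in sample.splitlines():
    fields = line.split()
    if len(fields) >= 6 and fields[-1] == "dest":
        dest_segments.append(fields[1])  # keep the shmid

print(dest_segments)  # → ['393223']
```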


Related Topic - Shared Memory Parameter in HANA

Tuesday, January 26, 2016

Mergedog in HANA



Mergedog is a process that triggers an auto merge based on a decision formula.

FAQ - How to check the configuration of mergedog in the ini files?

In the Configuration tab of SAP HANA Studio filter for 'mergedog'. Expand the section of indexserver.ini and check if parameters are configured as default:

Default Value        -  active = Yes
Default Value        -  check_intervall = 60000 (corresponding to one trigger of mergedog per minute)

To correct the parameter value, double-click on the parameter name and choose Restore Default. This will delete all custom values on system and host level and restore the default value system-wide.

Note: All other parameters in mergedog configuration should only be changed based on recommendations from SAP Support after analyzing your specific case. Typically such a change is not required.

Related Topic - Delta Merge in SAP HANA
Related Topic - Delta Merge Issue in SAP HANA
Related Topic - Auto Merge threshold formula in HANA

Auto Merge threshold formula in HANA



Here is the threshold formula for auto merge; it is configured in auto_merge_decision_func under indexserver.ini.

(((DMS>PAL/2000 or DCC>100) and DRC>MRC/100) or (DMR>0.2*MRC and DMR>0.001)) and (DUC<0.1 or 0.05>=DUC)

DMS : Delta memory size [MB]
PAL : Process allocation limit [MB]
DCC : Delta cell count [million] This refers to the current number of cells in the delta storage of the table.
DRC : Delta row count [million] This refers to the current number of rows in the delta storage of the table.
MRC : Main row count [million] This refers to the current number of rows in the main storage of the table.
DMR : Deleted main rows [million] This refers to the number of deleted records not in delta storage, but marked as deleted in main storage. Merging makes sense if there are many deleted rows.
DUC : Delta uncommitted row count [million] This refers to the number of uncommitted rows in the delta storage of the table.
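The decision formula can be written as a small function using the variables defined above; a sketch, with units as defined above (sizes in MB, counts in millions). Note that the DUC condition is partly garbled in the post, so the form (DUC<0.1 or 0.05>=DUC) used here is an assumption:

```python
def auto_merge_due(dms, pal, dcc, drc, mrc, dmr, duc):
    """Evaluate the auto merge decision formula.
    Sizes (dms, pal) in MB; counts (dcc, drc, mrc, dmr, duc) in millions."""
    size_or_cells = (dms > pal / 2000 or dcc > 100) and drc > mrc / 100
    deleted_rows = dmr > 0.2 * mrc and dmr > 0.001
    uncommitted_ok = duc < 0.1 or 0.05 >= duc  # assumed reconstruction
    return (size_or_cells or deleted_rows) and uncommitted_ok

# Hypothetical table: 200 MB delta, 100 GB allocation limit,
# 2 million delta rows against 100 million main rows.
print(auto_merge_due(dms=200, pal=100_000, dcc=0, drc=2,
                     mrc=100, dmr=0, duc=0))  # → True
```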

How to Set Parameter -  indexserver.ini -> [mergedog] -> auto_merge_decision_func

Note: This formula can be found in the SAP HANA Administration Guide.

If you match this formula against the result of selecting table BGRFC_UNIT_TIME from M_CS_TABLES, you will get an idea of the auto merge threshold.

Example: 

RECORD_COUNT : 7,039
RAW_RECORD_COUNT_IN_DELTA : 30,980
In this case, the condition (DMR>0.2*MRC and DMR>0.001) is not met:
((8,489 - 7,039) > 0.2*8,489 and (8,489 - 7,039) > 0.001)

This is why auto merge does not trigger in this case.
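The arithmetic behind this example can be checked directly (values taken from the calculation above; note the example compares raw row counts rather than millions):

```python
# Values from the example above
total_rows = 8489      # row count used in the original calculation
record_count = 7039    # RECORD_COUNT

dmr = total_rows - record_count                 # deleted main rows: 1450
condition = dmr > 0.2 * total_rows and dmr > 0.001
print(dmr, condition)  # → 1450 False  (1450 is below 0.2*8489 = 1697.8)
```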

Related Topic - Delta Merge in SAP HANA
Related Topic - Delta Merge Issue in SAP HANA
Related Topic - Mergedog in SAP HANA

Preload Column Tables Parameter in HANA



The following SAP HANA parameters control which columns are loaded during SAP HANA startup, and on the secondary system of a system replication scenario, based on the columns that were loaded into memory before the shutdown.

Parameter Name       -  preload_column_tables
Default Value             -  true

About Parameter      - 

Per default SAP HANA loads the columns into the memory of the secondary system of a system replication scenario during normal uptime. This has the advantage that a reload is not required at failover time. If you want to disable this feature (e.g. because only limited memory is available on the secondary side), you can set the preload_column_tables parameter to ‘false’.

The effect of this parameter depends on the system where it is set.

Primary system: Information about loaded tables is collected and persisted in the system replication related snapshot.
Secondary system: The load information from primary is evaluated and the tables are loaded accordingly.

How to Set Parameter -  global.ini -> [system_replication] -> preload_column_tables


Additional Command - We can check which tables are currently part of this reload information using the following hdbcons command.

tablepreload c -f

Reload Preloaded Tables Parameter in HANA



The following SAP HANA parameters control which columns are loaded during SAP HANA startup, and on the secondary system of a system replication scenario, based on the columns that were loaded into memory before the shutdown.

Parameter Name       - reload_tables
Default Value           - true

About Parameter      - 

If set to ‘true’, SAP HANA loads the columns that were in memory before shutdown back into memory during startup. This can be considered pre-warming, ensuring that column loads are not required when a table is accessed explicitly for the first time.

How to Set Parameter -  indexserver.ini -> [sql] -> reload_tables


Parameter Name       -  tables_preloaded_in_parallel
Default Value            -  5

About Parameter      - 

Number of tables loaded in parallel after startup.

A higher value typically results in quicker reloads, but a higher CPU consumption, so it is a trade-off between load time and resource consumption. If you want to adjust it, you should perform some tests to find an optimal value to fulfil your needs.

How to Set Parameter -  indexserver.ini -> [parallel] -> tables_preloaded_in_parallel
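The trade-off described above can be illustrated with a worker pool capped at the parameter value; a minimal sketch (the table names and load function are hypothetical placeholders, not real HANA calls):

```python
from concurrent.futures import ThreadPoolExecutor
import time

TABLES_PRELOADED_IN_PARALLEL = 5  # mirrors the default parameter value

def load_table(name):
    """Placeholder for loading one column table into memory."""
    time.sleep(0.01)  # simulate load time
    return name

# Hypothetical list of tables to pre-warm after startup
tables = [f"TABLE_{i:03d}" for i in range(20)]

# At most 5 tables are loaded concurrently; a higher cap finishes
# sooner but consumes more CPU during startup.
with ThreadPoolExecutor(max_workers=TABLES_PRELOADED_IN_PARALLEL) as pool:
    loaded = list(pool.map(load_table, tables))

print(f"loaded {len(loaded)} tables, "
      f"{TABLES_PRELOADED_IN_PARALLEL} at a time")
```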