Storage Server Installation and Configuration

This chapter describes how to install a storage server and register disks and flash devices. It also explains how to create disk space and tablespaces, and install a database on TAS and TAC instances using SSVR instances.

Node Specifications

All installation and configuration examples in this chapter are based on the specifications of the DB node and Storage node shown in the table below. The specifications of each node are the main factors that affect ZetaData configuration and system performance, so it is important to verify them before starting configuration.

Item                  DB Node Specifications    Storage Node Specifications
Memory                256 GB                    96 GB
Disk configuration    2 TB NVMe x 4             4 TB HDD x 12, 2 TB NVMe x 4

There are several ways to check memory and disk capacity.

The following is an example of checking the physical memory capacity of nodes.

$ grep MemTotal /proc/meminfo 
MemTotal: 98707668 kB

The following is an example of checking the disk capacity of a node.

$ fdisk -l
Disk /dev/sdd: 4000.8 GB, 4000753475584 bytes, 7813971632 sectors 
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes 
I/O size (minimum/optimal): 262144 bytes / 262144 bytes 
Disk label type: dos
Disk identifier: 0x00000000
...

Note

DB nodes require a high performance CPU and plenty of memory, and disk is primarily used for OS storage. Storage nodes, on the other hand, focus on storing and serving data, so disk capacity and speed are more important factors. For the same cost, it is recommended to configure the DB node to focus on CPU and memory, and the Storage node to focus on disk and flash devices.


RAID Installation

Case 1. Installing the OS without using RAID

It is assumed that a storage node has twelve 4 TB disks. The OS and the storage server's binary and control file are installed in 400 GB of space allocated to one of the disks.

If RAID is not used, then 3.6 TB of space (excluding the 400 GB where the OS is installed) is configured as a single partition. Therefore, the storage node can be configured by using the 3.6 TB partition of the disk in which the OS is installed, and eleven 4 TB disks.

Case 2. Installing the OS using RAID

It is assumed that a storage node has twelve 4 TB disks, and two of them are configured as RAID 1 mirroring. The OS and the storage server's binary and control file are installed in 400 GB of space allocated to the RAID1.

After installing the OS, divide the remaining 7.2 TB into two separate 3.6 TB volumes, and configure RAID 0 for each one. Therefore, the storage node can be configured by using the two 3.6 TB disks in the RAID 1 mirror and the remaining ten 4 TB disks, excluding the space where the OS is installed.


Storage node disk configuration

The following is the process of preparing shared disks for installation. The process must be performed on all nodes.

Storage node disks can be made usable by modifying their permissions or ownership. However, device names can change after an OS reboot. To prevent this issue, it is recommended to configure storage node disks by using udev.

The following briefly describes udev.

  • udev is system software that delivers device events, manages device node permissions, and creates or renames symbolic links in the /dev directory or for network interfaces.

  • When a device is detected by the kernel, udev gathers properties such as the serial number or bus device number from the sysfs directory to determine a unique name for the device. udev tracks devices in the /sys file system by their major and minor numbers, and uses system memory and sysfs to manage device information.

  • When the kernel raises an event because a module is loaded or a device is added or removed, udev follows its rules to set the device file name, create symbolic links, set file permissions, and so on.

The following is an example of storage node disks configured using udev.

The links with /dev/ in the above example are symbolic links created according to the udev rules.

The following is an example of the udev rules file for configuring disks.

The udev rules file must be saved with the .rules extension in the /etc/udev/rules.d folder.

The rule in the example means: among block devices (SUBSYSTEM=="block") whose kernel name is sda (KERNEL=="sda"), find the device whose SCSI ID matches the specified value (RESULT=="SCSI_ID"), set the owner and access permissions, and create the given symbolic link.
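A rule of the shape just described can be sketched as follows; the file name, SCSI ID, owner, group, and symbolic link name are placeholders that must be replaced with values from the actual environment.

```
# /etc/udev/rules.d/99-ssvr-disk.rules -- all values below are placeholders
KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="3600508b1001c5a1e", OWNER="tibero", GROUP="dba", MODE="0660", SYMLINK+="ssvr_disk1"
```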

The device SCSI_ID can be checked by executing /lib/udev/scsi_id (for RHEL 7).

On other OS versions, execute /sbin/scsi_id instead. This program must be executed with root privileges, and the following is an example of checking the scsi_id.
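For example (run as root; the device path /dev/sda is a placeholder for the disk being checked):

```shell
/lib/udev/scsi_id -g -u -d /dev/sda    # RHEL 7
# /sbin/scsi_id -g -u -d /dev/sda      # other OS versions
```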


Kernel Parameter Configuration

For proper operation of ZetaData, several kernel parameters must be configured. The location of the kernel parameter configuration file is as follows.

The following is an example of configuring kernel parameters.
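Kernel parameters are typically set in /etc/sysctl.conf (or a file under /etc/sysctl.d/) and applied with sysctl -p. The values below are illustrative only and must be sized to the actual system, following the per-parameter guidance below.

```
# Illustrative values only -- size these to your system
kernel.shmmax = 68719476736      # 64 GB: max bytes per shared memory segment
kernel.shmall = 16777216         # total shared memory pages system-wide
fs.aio-max-nr = 1048576          # max concurrent asynchronous I/O requests
fs.file-max = 6815744            # system-wide max open file descriptors
vm.max_map_count = 262144        # max memory mappings per process
```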

kernel.shmmax parameter

  • It refers to the maximum value in bytes that can be allocated to a single shared memory segment.

  • The value should be larger than the total shared memory to be allocated to the SSVR instance.

kernel.shmall parameter

  • It refers to the total number of pages of shared memory available system-wide.

  • The value should be at least the total shared memory of the SSVR instance divided by the page size, because the SSVR instance generally uses most of the system's shared memory.

The following is the command to check the page size of the system.
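The page size can be checked with getconf:

```shell
getconf PAGE_SIZE    # typically prints 4096 on x86-64 Linux
```

Dividing the shared memory size of the SSVR instance by this value gives a baseline for kernel.shmall.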

fs.aio-max-nr parameter

  • It refers to the maximum number of asynchronous I/O requests that the system can handle simultaneously.

  • Set the value appropriately based on monitoring of actual usage.

fs.file-max parameter

  • It refers to the maximum number of file descriptors that can be open simultaneously on the system.

  • The value should be adjusted according to the scale of the system.

vm.max_map_count parameter

  • It refers to the maximum number of memory mappings a process can have.

  • When using RDMA over InfiniBand, maps are frequently generated during the process of registering internal library resources and regions to RDMA, so the value should be set generously.

To use a TCP socket for storage and DB node communication, configure the maximum size of the socket buffer as follows.
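For example, the socket buffer limits can be raised in the kernel parameter file; the values below are illustrative only.

```
net.core.rmem_max = 4194304      # max receive socket buffer size (bytes)
net.core.wmem_max = 4194304      # max send socket buffer size (bytes)
```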



Environment Variables

The following are the environment variables used by an SSVR instance.

Variable    Description
$TB_HOME    Home directory where the storage server is installed.
$TB_SID     Service ID that identifies a storage server instance.


Note

The environment variables used by an SSVR instance are the same as in Tibero.

For more information, refer to the "Tibero Installation Guide".


Initialization Parameters

The initialization parameters for installing ZetaData are basically the same as those of Tibero, and the following parameters should be configured.

Initialization Parameters
Descriptions

SSVR_RECV_PORT_START

Sets the starting port number used by an SSVR instance. (Range: 1024-65535)

SSVR_USE_TCP

Set to "Y" to use the TCP protocol instead of the RDMA protocol. Set the same value in the initialization parameters of the TAS and TAC instances. (Default: N)

SSVR_USE_IB

Set to "Y" to use the RDMA protocol; the USE_ZETA parameter must also be "Y". If not set, it defaults to the opposite of the SSVR_USE_TCP parameter.

SSVR_USE_FC

Indicates whether to use flash cache. (default: Y)

SSVR_USE_SDM

Indicates whether to use storage data maps. (default: Y)

SSVR_USE_AGNT

Indicates whether to enable the agent process, which manages or assists with internal tasks and collects and sends Tibero Performance Monitor (TPM) performance metrics. (Default: N)

SSVR_USE_TPM

Indicates whether to enable the TPM (default: N).

SSVR_TPM_SENDER_INTERVAL

Sets the interval for checking sender connection status when TPM is enabled. (Default: 50)

SSVR_WTHR_CNT

Indicates the number of worker threads the SSVR instance uses to perform I/O. If not set, it is automatically set in proportion to the number of CPUs, so it does not normally need to be set.

(Range: SSVR_WTHR_CNT > 0)

INSTANCE_TYPE

Indicates the type of instance and is set for each instance.

  • TAS Instance: AS

  • SSVR Instance: SSVR

USE_ZETA

Configures ZetaData. If not set otherwise, it is set to "Y" only when the INSTANCE_TYPE parameter value is "SSVR".

The following is an example of setting the SSVR instance initialization parameters for Storage Node #0.

The MEMORY_TARGET parameter is the total amount of memory to be used by the SSVR instance.

Because InfiniBand libraries use a large amount of external memory for connections (4 MB per connection), it is recommended to set this parameter after consulting the technical support team.

The formula is as follows:

TOTAL_SHM_SIZE parameter is the total amount of shared memory to be used by the SSVR instance.

The minimum value is calculated by adding the 3 GB required by default to start the SSVR instance, plus 150 MB per 1 TB of disk, plus an additional allowance of 1 GB to 2 GB per 10 GB of disk.
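As a rough illustration of the base and per-TB terms above (the additional allowance is omitted, and the 48 TB figure is a hypothetical total disk capacity, not a value from this configuration):

```shell
# Minimum shared memory sketch: 3 GB base + 150 MB per 1 TB of disk,
# before adding the allowance described above.
BASE_MB=3072        # 3 GB expressed in MB
PER_TB_MB=150       # 150 MB per 1 TB of disk
DISK_TB=48          # hypothetical total disk capacity in TB
echo "$((BASE_MB + PER_TB_MB * DISK_TB)) MB"   # prints "10272 MB"
```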



Connection Information for Using tbSQL

To connect to a storage server by using tbSQL, configure the connection information in the $TB_HOME/client/config/tbdsn.tbr file.

The configuration method is the same as in Tibero with the exception that DB_NAME is not required.

The following is an example of configuring the file, $TB_HOME/client/config/tbdsn.tbr.
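A sketch of such an entry, following the general tbdsn.tbr format; the alias ssvr0, host, and port below are placeholders, and DB_NAME is omitted as described above.

```
# $TB_HOME/client/config/tbdsn.tbr -- alias, host, and port are placeholders
ssvr0=((INSTANCE=(HOST=192.168.1.101)
                 (PORT=9000)))
```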


Configuration of SSVR instance

To configure an SSVR instance, follow these steps.

In this example, three storage nodes and two DB nodes will be configured in the following steps.

  1. Create and start an SSVR instance

  2. Create a storage disk

  3. Create a grid disk

  4. Create a flash cache

1. Create and start an SSVR instance

Below is the process of creating an SSVR instance based on the settings in $TB_HOME/config/$TB_SID.tip.

Check the contents of the initialization parameters and create a control file accordingly. Start the SSVR instance in nomount mode and create an SSVR instance. After creation, the SSVR instance is automatically shut down.

To restart the SSVR instance, start it in mount mode.


2. Create a storage disk

Storage Disk is the physical disk that will be used by the SSVR instance.

To use the disks on the Storage node, each disk must be registered as a storage disk for the SSVR instance.

The following is the command for registering a storage disk.

Parameter
Description

storage disk

{storage-disk-name}

The name of the storage disk to register with the SSVR instance. It only needs to have a unique name within each SSVR instance.

path {path}

The path to the disk on the Storage node.

size {storage-disk-size}

The size of the storage disk to register with the SSVR instance. The default unit is bytes, and the suffixes K (KiB), M (MiB), G (GiB), T (TiB), P (PiB), or E (EiB) can be used.

Storage disk sizes use binary units (1T = 1024G). The fdisk command reports capacity using 1 TB = 1000 GB, so convert the reported capacity to binary units (1T = 1024G) when setting the storage disk size.

This is an optional parameter; if not specified, the total capacity of the device is used.

Configure the storage disk capacity to 4 TB minus 400 GB for the disk where the OS, SSVR binary files, and control files are installed. The examples below assume installation on the first disk.
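For example, converting the fdisk-reported size of the disk shown earlier (4,000,753,475,584 bytes) to binary units, and subtracting the 400 GB OS area on the first disk:

```shell
BYTES=4000753475584          # size reported by fdisk for a "4 TB" disk
GIB=$((BYTES / 1024 / 1024 / 1024))
echo "${GIB}G"               # whole-device size in GiB: prints "3725G"
OS_BYTES=400000000000        # 400 GB (decimal) reserved for the OS
echo "$(( (BYTES - OS_BYTES) / 1024 / 1024 / 1024 ))G"   # prints "3353G"
```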

Storage disk information can be viewed through the V$SSVR_STORAGE_DISK view.

3. Create a grid disk

A grid disk is a disk that is visible from outside an SSVR instance and must be contained within a single storage disk.

The following command registers a grid disk.

Parameter
Description

grid disk {grid-disk-name}

The name of the grid disk to register with the SSVR instance. It only needs to have a unique name within each SSVR instance.

storage disk

{storage-disk-name}

The name of a storage disk registered with the SSVR instance. The name can be viewed through the V$SSVR_STORAGE_DISK view.

offset {offset}

An offset into the storage disk. This is an optional parameter and its use is not recommended. If used, it must be a multiple of 32 KB.

size {grid-disk-size}

The size of the grid disk to register with the SSVR instance. The default unit is bytes, and the suffixes K (KiB), M (MiB), G (GiB), T (TiB), P (PiB), or E (EiB) can be used. It must not be larger than the available capacity of the storage disk. This is an optional parameter; if not specified, the largest multiple of 32 KB that fits in the total capacity of the specified storage disk is used.

The following is an example of creating a grid disk using the maximum available capacity of a registered storage disk by omitting the size option.

Grid disk information can be viewed through the V$SSVR_GRID_DISK view.

4. Create a flash cache

To improve I/O performance, register a flash device as a cache by using the following command.

Parameter
Description

path {path}

The prefix of the flash device name.

For example, if there are four flash devices (/dev/flash0, /dev/flash1, /dev/flash2, and /dev/flash3), the value is '/dev/flash'.

size {flashcache-size}

The size of each flash device. Flash device sizes use binary units (1T = 1024G). The fdisk command reports capacity using 1 TB = 1000 GB, so convert the reported capacity to binary units (1T = 1024G) when entering the size. This is an optional parameter; if not specified, the total capacity of the device is used.

Currently, only flash devices of the same size are supported. Note that the value must be the size of each individual flash device, not the total size of all flash devices.

The following is an example of creating a flash cache with the path prefix "/dev/flash", a start number of 0, four devices, and 1490 GB per flash device. A flash cache cannot be used until the storage server is restarted after the flash cache is created.

Flash cache information can be checked through the V$SSVR_FLASHCACHE view.


Note

Add two more SSVR instances with the same configuration as in the example above.

This example uses three SSVR instances to create disk space, so configure the storage disks, grid disks, and flash cache on the other two SSVR instances as well.


TAS/TAC Instance configuration

This section describes how to use SSVR instances on TAS and TAC to create disk space, tablespace, and install the DB.

SSVR Instance Access Information

To use SSVR instances in TAS and TAC instances, SSVR connection information must be configured.

In the "$TB_HOME/client/config/ssdsn.tbr" file, configure the network IP address of the storage node to be used for communication and the port information of the SSVR instance. This SSVR instance connection information needs to be set on all nodes where TAS and TAC instances are installed.

The following is a sample of the $TB_HOME/client/config/ssdsn.tbr file.
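A sketch of the file, assuming three storage nodes with hypothetical IP addresses and an SSVR_RECV_PORT_START of 9000 (one {storage node IP}/{port} entry per SSVR instance):

```
# $TB_HOME/client/config/ssdsn.tbr -- IPs and port are placeholders
192.168.1.101/9000
192.168.1.102/9000
192.168.1.103/9000
```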

The settings are as follows.

Item
Description

{storage node IP}

Indicates the IP address of the Storage node to use.

{port}

Indicates the port number of the SSVR instance to use, which corresponds to SSVR_RECV_PORT_START in the SSVR instance initialization parameters. It is written after the storage node IP, separated by a "/".

TAS/TAC configuration

The next step is to configure the TAS instance and configure DiskSpace with the grid disks of SSVR.

Configure CM instances and use them with TAS instances for cluster configuration of TAC instances.

  1. Configuring connection information for using TBSQL

  2. Configuring TAS instance on DB node #0 and creating disk space

  3. Configuring CM instance on DB node #0

  4. Starting CM and TAS instance on DB node #0

  5. Adding TAS on DB Node #1 from TAS instance on DB Node #0

  6. Configuring and starting TAS, CM instance on DB node #1

  7. Configuring and starting TAC instance on DB node #0

  8. Adding TAC on DB Node #1 from TAC instance on DB Node #0

  9. Starting TAC instance on DB node #1


Note

The order of starting the TAS and TAC instances on DB node #1 does not matter, as long as it is done after adding the TAS and TAC instances from DB node #0.

1. Configuring connection information for using TBSQL

The following is an example of a $TB_HOME/client/config/tbdsn.tbr file for connecting to SSVR, TAS, and TAC instances using tbSQL.

2. Configuring TAS instance on DB node #0 and creating disk space

The TAS instance connects to the SSVR instance through the SSVR instance's connection information recorded in the $TB_HOME/client/config/ssdsn.tbr file, and identifies each disk by the grid disk name created in the SSVR. The TAS instance recognizes file paths that start with "-" as grid disks in SSVR and can use them in all cases, including creating disk space and adding/deleting disks.

The following is an example initialization parameters and configuration for a TAS instance.

Initialization parameter
Description

AS_SCAN_SSVR_DISK

This initialization parameter, written in the TAS tip, indicates whether the TAS instance scans SSVR disks. It defaults to "N", so it must be set to "Y" in the TAS tip to use ZetaData.

The following is an example of the process of creating disk space using a grid disk in an SSVR instance.

AU (Allocation Unit) indicates the unit of allocation; the allocation unit size that can be set is 4 MB. The striping unit of the SSVR instance and the striping unit of TAS must be multiples of each other so that accesses align and performance improves.


Note

For a detailed description of disk space creation, refer to "Starting TAS" in the "Tibero Active Storage Administrator's Guide".


Note

REDUNDANCY is managed on a per-FAILGROUP basis. Therefore, it is recommended to configure a FAILGROUP for each set of SSVR instances that is likely to fail at the same time.


Disk space information can be checked through V$AS_DISKSPACE view.

3. Configuring CM instance on DB node #0

The CM instance registers the network and cluster and registers TAS and TAC as services to help ensure reliable cluster operations.

The following is an example of setting initialization parameters for CM. For more information, refer to the "Tibero Administrator's Guide".

4. Starting CM and TAS instance on DB node #0

The following is an example of the process of registering a network with CM, registering a cluster, starting a cluster, registering a TAS service, registering a TAS instance, and starting a TAS instance after starting a CM instance.


5. Adding TAS on DB Node #1 from TAS instance on DB Node #0

The following is an example of the process of adding DB node #1 TAS from DB node #0 TAS instance.

6. Configuring and starting TAS, CM instance on DB node #1

The following is an example of setting TAS instance initialization parameters for DB Node #1.

The following is an example of initialization parameter settings for CM instance on DB node #1.

The following is an example of starting a CM instance on DB node #1 and going through the process of network registration, cluster registration, cluster startup, TAS service registration, TAS instance registration, and TAS instance startup.

7. Configuring and starting TAC instance on DB node #0

The TAC instance connects to the SSVR instance through the SSVR instance connection information recorded in the $TB_HOME/client/config/ssdsn.tbr file. The TAC instance recognizes file paths that start with a "+" as virtual files managed by the TAS instance. This path can be used for the path of any file, including control files and CM files.

The following is an example of initialization parameters for the configuration of DB node #0 TAC instance as a cluster using TAS. Do not modify the 'DB_BLOCK_SIZE=32K' parameter.


The MEMORY_TARGET parameter is the total amount of memory to be used by the TAC instance.

Because InfiniBand libraries use a large amount of external memory for connections (4 MB per connection), it is recommended to consult the technical support team before setting this parameter.

The formula is as follows:

The following is an example of the process of registering the TAC service and registering the TAC instance on the CM instance of DB node #0 that was started in the previous step.

The following is an example of the process of creating a database by starting the TAC instance on DB node #0 in nomount mode and using the disk space of the TAS instance. The instance shuts down automatically after database creation, so restart it.

8. Adding TAC on DB Node #1 from TAC instance on DB Node #0

The following is an example of adding additional configuration on DB Node #0 TAC instance to start DB Node #1 TAC instance.

9. Starting TAC instance on DB node #1

The following is an example of setting the initialization parameters for DB Node #1 TAC instance. Do not modify 'DB_BLOCK_SIZE=32K'.

The MEMORY_TARGET parameter is the total amount of memory to be used by the TAC instance.

Because InfiniBand libraries use a large amount of external memory for connections (4 MB per connection), it is recommended to consult the technical support team before setting this parameter.

The formula is as follows:

The following is an example of registering and starting a TAC instance on the DB Node #1 CM instance that was started in the previous step.

Through the above process, SSVR instances, TAS instances, and TAC instances are configured on the three storage nodes and two DB nodes, respectively.


Note

Using SSVR instances on TAS and TAC instances does not require any configuration other than the access information described earlier.

For more information about installing and setting preferences for TAS and TAC, refer to "Tibero Active Storage Administrator's Guide" and "Tibero Administrator's Guide".


Verifying SSVR instance information

The following views display SSVR instance information. They can be queried only in an SSVR instance.

View
Description

V$SSVR_CLIENT

View the clients that the SSVR instance has connections to.

V$SSVR_FLASHCACHE

View the Flash Cache information connected to the SSVR instance.

V$SSVR_GRID_DISK

View the Grid Disk information connected to the SSVR instance.

V$SSVR_STORAGE_DISK

View the Storage Disk information connected to the SSVR instance.

V$SSVR_SLAB_STAT

View the SLAB information being used by the SSVR instance.

V$SSVR_MEMSTAT

View memory information used by the SSVR instance.

V$SSVR_CLIENT

The V$SSVR_CLIENT view shows information about all clients that the SSVR instance has connections to.

Column
Data type
Description

ADDRESS

VARCHAR(20)

The address of the client.

PORT

NUMBER

The port number that the client is connected to.

NAME

VARCHAR(128)

The name of the client.

THREAD_NUMBER

NUMBER

The thread number responsible for connecting to the client.

The following is an example of V$SSVR_CLIENT.

V$SSVR_FLASHCACHE

The V$SSVR_FLASHCACHE view shows information about all flash caches connected to an SSVR instance.

Column
Data type
Description

FLASHCACHE_NUMBER

NUMBER

The number of the flash cache.

NAME

VARCHAR(32)

The name of the flash cache.

PATH

VARCHAR(256)

The path to the flash cache.

OS_BYTES

NUMBER

The size of the flash cache as recognized by the OS.


Note

For an example of viewing V$SSVR_FLASHCACHE, refer to 'Create a flash cache'.

V$SSVR_GRID_DISK

The V$SSVR_GRID_DISK view shows information about all grid disks attached to an SSVR instance.

Column
Data type
Description

GRID_DISK_NUMBER

NUMBER

The number of the grid disk.

NAME

VARCHAR(128)

The name of the grid disk.

STORAGE_DISK_NUMBER

NUMBER

The number of the storage disk that is mapped to the grid disk.

STORAGE_DISK_OFFSET

NUMBER

The offset of the storage disk that is mapped to the grid disk.

TOTAL_BYTES

NUMBER

The size of the grid disk.


Note

For an example of viewing V$SSVR_GRID_DISK , refer to 'Create a grid disk'.

V$SSVR_STORAGE_DISK

The V$SSVR_STORAGE_DISK view shows information about all storage disks attached to an SSVR instance.

Column
Data type
Description

STORAGE_DISK_NUMBER

NUMBER

The number of the storage disk.

NAME

VARCHAR(128)

The name of the storage disk.

PATH

VARCHAR(256)

The path of the storage disk.

OS_BYTES

NUMBER

The size of the storage disk as recognized by the OS.


Note

For an example of viewing V$SSVR_STORAGE_DISK , refer to 'Create a storage disk'.

V$SSVR_SLAB_STAT

The V$SSVR_SLAB_STAT view shows the SLAB information that the SSVR instance is currently using.

Column
Data type
Description

SLAB_SIZE

NUMBER

The size of the SLAB.

SLAB_GET_CNT

NUMBER

The number of SLAB get operations.

TOTAL_CHUNK_CNT

NUMBER

The total number of chunks.

MAX_CHUNK_CNT

NUMBER

The maximum possible number of chunks.

The following is an example of V$SSVR_SLAB_STAT.

V$SSVR_MEMSTAT

The V$SSVR_MEMSTAT view shows memory usage information for an SSVR instance. The units are expressed in MB.

Column
Data type
Description

TOTAL_PGA_MEMORY_MB

NUMBER

The total size of the process memory.

FIXED_PGA_MEMORY_MB

NUMBER

The size of the fixed process memory.

USED_PGA_MEMORY_MB

VARCHAR(128)

The amount of process memory used.

TOTAL_SHARED_MEMORY_MB

NUMBER

The total size of shared memory.

FIXED_SHARED_MEMORY_MB

NUMBER

The size of fixed shared memory.

USED_SHARED_MEMORY_MB

NUMBER

The amount of shared memory used.

The following is an example of V$SSVR_MEMSTAT.
