Appendix

This chapter describes how to use TAS to configure Tibero.

The example is based on a Linux x86_64 system. All commands executed on TAS and Tibero, except SQL, must be adjusted to match the operating system of the target system.

Note: The example in this chapter represents best practices and does not represent the minimum system requirements.

For system requirements, refer to "Tibero Installation Guide". TAS only supports Linux and AIX.

Preparing for Installation

This section describes the items that need to be prepared before configuring the database.

Installation Environment

The following table describes the installation environment for the example.

Item                        Value
Number of Nodes             2
Internal Node IP            100.100.100.11, 100.100.100.12
TAS PORT                    9620
TAS CM PORT                 20005
TAS LOCAL CLUSTER PORT      20000
Number of Shared Disks      4
Shared Disk Size            512GB each
Shared Disk Path            /dev/sdc, /dev/sdd, /dev/sde, /dev/sdf
Installation Account        dba
Tibero Installation Path    /home/dba/tibero

  • It is assumed that the required binaries already exist in the installation path.

  • It is assumed that the same shared disk path is seen from all nodes.

Disk Preparation

The following is the process of preparing shared disks. The process must be performed on all nodes.

Disks can be made usable as shared disks by changing their permissions or ownership. However, device names may change after the OS reboots. To prevent this, it is recommended to configure device nodes using udev.

The following is an example of device nodes configured using udev.
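
A minimal sketch of the expected result, assuming the udev rules create symbolic links named /dev/tas-disk1 through /dev/tas-disk4 (hypothetical names) pointing to the four shared disks:

    $ ls -l /dev/tas-disk*
    lrwxrwxrwx 1 root root 3 Jan  1 09:00 /dev/tas-disk1 -> sdc
    lrwxrwxrwx 1 root root 3 Jan  1 09:00 /dev/tas-disk2 -> sdd
    lrwxrwxrwx 1 root root 3 Jan  1 09:00 /dev/tas-disk3 -> sde
    lrwxrwxrwx 1 root root 3 Jan  1 09:00 /dev/tas-disk4 -> sdf

    $ ls -l /dev/sd[c-f]
    brw-rw---- 1 dba dba 8, 32 Jan  1 09:00 /dev/sdc
    brw-rw---- 1 dba dba 8, 48 Jan  1 09:00 /dev/sdd
    brw-rw---- 1 dba dba 8, 64 Jan  1 09:00 /dev/sde
    brw-rw---- 1 dba dba 8, 80 Jan  1 09:00 /dev/sdf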

The links displayed as the result of the first ls command in the above example are symbolic links created according to the udev rules.

The following is an example of the udev rules file used to configure device nodes.
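
A sketch of such a rules file (for example, /etc/udev/rules.d/99-tas-disks.rules, a hypothetical name); the SCSI_ID values are placeholders to be replaced with the values obtained from scsi_id as described below:

    # 99-tas-disks.rules (hypothetical file name) -- one rule per shared disk
    KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/%k", RESULT=="<SCSI_ID of /dev/sdc>", OWNER="dba", GROUP="dba", MODE="0660", SYMLINK+="tas-disk1"
    KERNEL=="sd?", SUBSYSTEM=="block", PROGRAM=="/lib/udev/scsi_id -g -u -d /dev/%k", RESULT=="<SCSI_ID of /dev/sdd>", OWNER="dba", GROUP="dba", MODE="0660", SYMLINK+="tas-disk2"
    # ... repeat in the same form for /dev/sde (tas-disk3) and /dev/sdf (tas-disk4)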

The udev rules file must have the .rules extension and must be located under the /etc/udev/rules.d directory (the path shown is for Ubuntu 14.04 and may vary by OS).

The above example file matches block devices (SUBSYSTEM=="block") whose kernel name matches sd? (KERNEL=="sd?") and whose SCSI ID equals the specified value (RESULT=="SCSI_ID"), then sets the owner and permissions and creates the given symbolic link for the device.

To check a device's SCSI_ID, execute /lib/udev/scsi_id with administrator privileges (the path shown is for Ubuntu 14.04 and may vary by OS). The following is an example.
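
A sketch, where /dev/sdc is one of the shared disks and the printed ID is a placeholder:

    # The printed value is the SCSI_ID to use in the RESULT== clause of the udev rule.
    $ sudo /lib/udev/scsi_id -g -u -d /dev/sdc
    <SCSI_ID of /dev/sdc>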

Kernel Parameter Setting

To configure Tibero with TAS, set an additional kernel parameter beyond those described in the "Tibero Installation Guide."

The kernel parameter can be set in the following configuration file (in Linux).

Set the kernel parameter as follows:
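
As a sketch only: on Linux, kernel parameters are typically set in /etc/sysctl.conf and applied with sysctl -p. The parameter name and value below (the asynchronous I/O request limit) are assumptions for illustration; use the exact parameter and value given in the "Tibero Installation Guide."

    # /etc/sysctl.conf -- assumed parameter and value for illustration only
    fs.aio-max-nr = 4194304

    # Apply the change without rebooting
    $ sudo sysctl -p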


TAS Instance Installation

This section describes how to configure a TAS instance with two nodes.

Environment Variable Configuration

Configure the environment variables in the user configuration files (.bashrc, etc.) of the OS.

Set the TB_HOME variable to the path where the TAS instance binary is installed on each node. Set the TB_SID variable to a distinct value for each node so that they can be identified uniquely.

The following is an example of configuring the first node. Set TB_SID of the TAS instance and CM_SID of the Cluster Manager on the first node to 'as0' and 'cm0' respectively.
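
A sketch of the node 1 settings in ~/.bashrc of the dba account, assuming the installation path from the table above; the PATH and LD_LIBRARY_PATH entries follow the usual Tibero layout and may need adjustment:

    # Node 1 (~/.bashrc of the dba account)
    export TB_HOME=/home/dba/tibero
    export TB_SID=as0
    export CM_SID=cm0
    export PATH=$TB_HOME/bin:$TB_HOME/client/bin:$PATH
    export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH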

Set TB_SID of the TAS instance and CM_SID of the Cluster Manager on the second node to 'as1' and 'cm1' respectively.

Initialization Parameter Configuration

The initialization parameters for TAS are the same as those for Tibero Active Cluster (TAC) by default.

The following are the additional initialization parameters required for a TAS instance.

Parameter         Description
INSTANCE_TYPE     Instance type (set to AS to indicate Active Storage).
TAS_DISKSTRING    Path pattern to use for disk search.

The following are the initialization parameters for node 1.
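
A sketch of the node 1 parameter file ($TB_HOME/config/as0.tip), using the ports, addresses, and disk names from the environment above; apart from INSTANCE_TYPE and TAS_DISKSTRING, the parameter set and values shown are assumptions based on common TAC/TAS configurations:

    # as0.tip (node 1) -- sketch; adjust memory and cluster settings to your environment
    INSTANCE_TYPE=AS
    LISTENER_PORT=9620
    TAS_DISKSTRING="/dev/tas-disk*"      # udev symbolic links created earlier (hypothetical names)
    TOTAL_SHM_SIZE=1G
    MEMORY_TARGET=2G
    CLUSTER_DATABASE=Y
    THREAD=0
    LOCAL_CLUSTER_ADDR=100.100.100.11
    LOCAL_CLUSTER_PORT=20000
    CM_PORT=20005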

The following are the initialization parameters for node 2.
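
A sketch of the node 2 parameter file (as1.tip), which differs from node 1 only in the thread number and cluster address (same assumptions as above):

    # as1.tip (node 2) -- sketch
    INSTANCE_TYPE=AS
    LISTENER_PORT=9620
    TAS_DISKSTRING="/dev/tas-disk*"
    TOTAL_SHM_SIZE=1G
    MEMORY_TARGET=2G
    CLUSTER_DATABASE=Y
    THREAD=1
    LOCAL_CLUSTER_ADDR=100.100.100.12
    LOCAL_CLUSTER_PORT=20000
    CM_PORT=20005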

Connection String Configuration

The following is an example of setting the connection string configuration file.
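
A sketch of $TB_HOME/client/config/tbdsn.tbr on node 1 (node 2 would reference as1 and 100.100.100.12); the entry format follows the usual Tibero convention, and the DB_NAME value is an assumption:

    # tbdsn.tbr (node 1) -- sketch
    as0=(
        (INSTANCE=(HOST=100.100.100.11)
                  (PORT=9620)
                  (DB_NAME=as0)
        )
    )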

Creating and Starting a Disk Space

The following describes how to create and start a disk space. A combined command sketch for these steps follows the list.

  1. Before creating a disk space, start the TAS instance on node 1 in NOMOUNT mode first.

  2. Connect to the running instance, and create a disk space.

    To use an external data mirroring function such as RAID for high availability, the redundancy level must be set to EXTERNAL REDUNDANCY to prevent the use of internal data mirroring.

    Once the disk space is created, the TAS instance is stopped automatically and can be restarted in NORMAL mode.

  3. Create an as resource configuration file that is required for the Cluster Manager on node 1 to execute the AS binary.

  4. To configure a cluster with TAS instances, start up the Cluster Manager and add resources. When adding the as resource, specify the path to the created configuration file in the envfile attribute.

  5. Start up the instance in NORMAL mode.

  6. Connect to the running instance and add a thread so that the TAS instance on node 2 can start up.

  7. Create an as resource configuration file that is required for the Cluster Manager on node 2 to execute the AS binary.

  8. Start up the Cluster Manager and add resources on node 2.

  9. Start up the TAS instance on node 2.
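
A combined sketch of the commands behind the steps above, covering node 1 and the thread addition for node 2. The diskspace name (DS0), the SYS password, and the cmrctl resource options are assumptions following common Tibero/TAC examples; consult the utility and SQL references of your Tibero version for the exact syntax.

    ## Step 1: boot the TAS instance on node 1 in NOMOUNT mode
    $ tbboot -t NOMOUNT

    ## Step 2: connect and create the disk space (EXTERNAL REDUNDANCY assumes RAID-level mirroring)
    $ tbsql sys/tibero
    SQL> CREATE DISKSPACE DS0 EXTERNAL REDUNDANCY
           DISK '/dev/tas-disk1', '/dev/tas-disk2', '/dev/tas-disk3', '/dev/tas-disk4';

    ## Steps 3-4: boot the Cluster Manager and register the as resource
    ## (the envfile below is the as resource configuration file created in step 3;
    ##  resource names and options are assumptions)
    $ tbcm -b
    $ cmrctl add as --name as0 --svcname TAS --dbhome $TB_HOME --envfile /home/dba/tibero/as0.env

    ## Step 5: start the TAS instance in NORMAL mode
    $ tbboot

    ## Step 6: add a thread for the TAS instance on node 2
    $ tbsql sys/tibero
    SQL> ALTER DISKSPACE DS0 ADD THREAD 1;

    ## Steps 7-9 repeat the resource registration and startup on node 2 with as1/cm1.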


Tibero Instance Installation

The steps to install and start up a Tibero instance are the same as those for a DB instance without a TAS instance, except for the paths specified for the files that will be created.

Environment Variable Configuration

Configure the environment variables in the user configuration files (.bashrc, etc.) of the OS.

Set the TB_HOME variable to the path where the Tibero DB instance binary is installed on each node. Set the TB_SID variable to a distinct value for each node. The following example sets TB_SID of the DB instance on the first node to 'tac0'.
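
A sketch of the node 1 settings, assuming the DB binary shares the installation path from the environment table; because the TAS instance on the same node uses a different SID, TB_SID is in practice set per shell session:

    # Node 1 -- DB instance environment (sketch)
    export TB_HOME=/home/dba/tibero
    export TB_SID=tac0
    export CM_SID=cm0
    export PATH=$TB_HOME/bin:$TB_HOME/client/bin:$PATH
    export LD_LIBRARY_PATH=$TB_HOME/lib:$TB_HOME/client/lib:$LD_LIBRARY_PATH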

Set the TB_SID of the DB instance on node 2 to 'tac1'. If CM_SID is not set, set it to cm1.

The SID of each instance must be distinct only on the same server. Duplicate SIDs can be used if the instances are on different servers.

Initialization Parameter Configuration

The following parameters must be set to use the TAS instance.

Parameter             Description
USE_ACTIVE_STORAGE    Option to use TAS instance (set to Y).
AS_PORT               LISTENER_PORT of the TAS instance.

The following are the initialization parameters for node 1.
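
A sketch of the node 1 parameter file (tac0.tip). Apart from USE_ACTIVE_STORAGE and AS_PORT, the parameter set, port numbers, and the diskspace name DS0 are assumptions; in particular, the DB listener port and the DB-side local cluster port are not given in the environment table:

    # tac0.tip (node 1) -- sketch; values marked below are assumptions
    DB_NAME=tibero
    LISTENER_PORT=8629                  # assumed default DB listener port
    USE_ACTIVE_STORAGE=Y
    AS_PORT=9620                        # LISTENER_PORT of the TAS instance
    CONTROL_FILES="+DS0/c1.ctl"         # files are created in the TAS disk space DS0
    TOTAL_SHM_SIZE=2G
    MEMORY_TARGET=4G
    CLUSTER_DATABASE=Y
    THREAD=0
    UNDO_TABLESPACE=UNDO0
    LOCAL_CLUSTER_ADDR=100.100.100.11
    LOCAL_CLUSTER_PORT=20010            # assumed; must differ from the TAS local cluster port
    CM_PORT=20005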

The following are the initialization parameters for node 2.
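
A sketch of the node 2 parameter file (tac1.tip), differing from node 1 only in the thread, undo tablespace, and cluster address (same assumptions as above):

    # tac1.tip (node 2) -- sketch
    DB_NAME=tibero
    LISTENER_PORT=8629
    USE_ACTIVE_STORAGE=Y
    AS_PORT=9620
    CONTROL_FILES="+DS0/c1.ctl"
    TOTAL_SHM_SIZE=2G
    MEMORY_TARGET=4G
    CLUSTER_DATABASE=Y
    THREAD=1
    UNDO_TABLESPACE=UNDO1
    LOCAL_CLUSTER_ADDR=100.100.100.12
    LOCAL_CLUSTER_PORT=20010
    CM_PORT=20005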

Connection String Configuration

The following is an example of setting the connection string configuration file.
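
A sketch of tbdsn.tbr on node 1 for the DB instance (node 2 would reference tac1 and 100.100.100.12); the listener port is the assumed value from the parameter file above:

    # tbdsn.tbr (node 1) -- sketch
    tac0=(
        (INSTANCE=(HOST=100.100.100.11)
                  (PORT=8629)
                  (DB_NAME=tibero)
        )
    )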

Creating and Starting Up a Database

The following describes how to create and start a database. A combined command sketch for these steps follows the list.

  1. Create a DB resource configuration file that is required for the Cluster Manager to execute the tibero binary.

  2. Create a resource used for tibero clustering on the Cluster Manager that was booted when configuring the TAS instance on node 1.

  3. Start up the instance in NOMOUNT mode.

  4. Connect to the instance and create the database.

  5. Restart the instance in NORMAL mode, and then create the UNDO tablespace and REDO THREAD for node 2.

  6. Add a DB resource to the Cluster Manager on node 2.

  7. Start up the instance on node 2.
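
A combined sketch of the commands behind the steps above for node 1 and the node 2 preparation. The SYS password, file sizes, log group layout, the diskspace name DS0, and the cmrctl options are assumptions following common Tibero/TAC examples; consult your version's references for the exact syntax.

    ## Steps 1-2: register the DB resource with the Cluster Manager on node 1
    ## (envfile points at the DB resource configuration file; options are assumptions)
    $ cmrctl add db --name tac0 --svcname TAC --dbhome $TB_HOME --envfile /home/dba/tibero/tac0.env

    ## Step 3: boot the DB instance in NOMOUNT mode
    $ tbboot -t NOMOUNT

    ## Step 4: connect and create the database; all files are placed in the TAS disk space
    $ tbsql sys/tibero
    SQL> CREATE DATABASE "tibero"
           USER sys IDENTIFIED BY tibero
           CHARACTER SET UTF8
           LOGFILE GROUP 0 ('+DS0/log001.log') SIZE 512M,
                   GROUP 1 ('+DS0/log011.log') SIZE 512M,
                   GROUP 2 ('+DS0/log021.log') SIZE 512M
           DATAFILE '+DS0/system001.dtf' SIZE 2G AUTOEXTEND ON
           DEFAULT TEMPORARY TABLESPACE TEMP
             TEMPFILE '+DS0/temp001.dtf' SIZE 2G AUTOEXTEND ON
           UNDO TABLESPACE UNDO0
             DATAFILE '+DS0/undo001.dtf' SIZE 2G AUTOEXTEND ON;

    ## Step 5: restart in NORMAL mode, then create node 2's undo tablespace and redo thread
    $ tbboot
    $ tbsql sys/tibero
    SQL> CREATE UNDO TABLESPACE UNDO1 DATAFILE '+DS0/undo101.dtf' SIZE 2G AUTOEXTEND ON;
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 3 ('+DS0/log031.log') SIZE 512M;
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 4 ('+DS0/log041.log') SIZE 512M;
    SQL> ALTER DATABASE ADD LOGFILE THREAD 1 GROUP 5 ('+DS0/log051.log') SIZE 512M;
    SQL> ALTER DATABASE ENABLE PUBLIC THREAD 1;

    ## Steps 6-7 repeat the resource registration and startup on node 2 with tac1/cm1.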


TAS Recommendations

The following are the recommended requirements for TAS.

Initialization Parameter Settings

Set the memory for the Active Storage instance to a lower value than that of the database instance. The following memory settings are recommended for proper operation.

Parameter         Value
TOTAL_SHM_SIZE    1GB or more
MEMORY_TARGET     2GB or more

Disk Space REDUNDANCY Setting

When using an external data replication function like RAID, REDUNDANCY can be set to EXTERNAL. Otherwise, REDUNDANCY should be set to NORMAL. However, it can be set to HIGH if data availability is preferred over performance.

Disk Space Failure Group Setting

It is recommended to place disks that reside in the same physical server or switch in the same failure group since there is a high probability that a switch or cable failure will cause all the disks to be inaccessible. Since Active Storage keeps replicated copies of data in another failure group, data still can be accessed when failure occurs in the failure group. It is recommended to use three or more failure groups.

Disk Space Capacity Setting

Set the disk space capacity properly based on the database capacity. The maximum number of disks for a disk space is 1,024, and the maximum disk space capacity is 16 TB. It is recommended to maintain enough free disk space for data replication during disk failures.

Management Tips for DBA

Connect to an Active Storage instance as the SYS user using SQL. The current disk space and disk states can be checked by querying the views described in "TAS Information Views". When a disk fails, its state in the view changes to FAIL.
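
A brief sketch, assuming views named V$AS_DISKSPACE and V$AS_DISK (verify the exact view names in "TAS Information Views"):

    $ tbsql sys/tibero@as0

    SQL> SELECT * FROM V$AS_DISKSPACE;   -- overall disk space state
    SQL> SELECT * FROM V$AS_DISK;        -- per-disk state; a failed disk is shown as FAIL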

Disk Properties

It is recommended to use the same property settings (size, speed, etc.) for disks in the same disk space. This makes disk striping more effective by balancing the load across the disks.
