This chapter explains the prerequisites and precautions before installing the Tibero client, considerations after installation, and how to install the Tibero client on Linux.
The guide contains 2 chapters.
Chapter 1 describes the prerequisites and precautions before installing the Tibero client, and considerations after installation.
Chapter 2 explains how to install the Tibero client in manual mode.
The following is the Tibero manual uninstallation process in a Unix environment.
1. Shut down Tibero.
2. Delete the Tibero directory and all of its subdirectories by using OS commands.

```shell
tbdown
rm -rf $TB_HOME
```

This chapter briefly introduces Tibero and describes the system requirements for installation.
Enterprise business is rapidly expanding with the explosive increase of data and the advent of a variety of new environments and platforms. This new business environment requires more flexible and efficient data services, information handling, and data management functions.
Tibero is an enterprise database management system that supports building a database infrastructure on which enterprise business is implemented and provides high performance, high availability, and scalability.
To address limitations of existing databases, Tibero implemented its proprietary Tibero thread architecture. It also uses limited system resources such as CPU and memory efficiently, guarantees high performance and reliability, and provides a convenient development environment and management features.
Tibero, from its very earliest version, has been developed to handle a large number of users and large amounts of data while ensuring reliability and compatibility.
1. Create a client account.
2. Unzip the binary archive.
3. Configure the profile. Modify the content to suit the environment and add it to the profile of the client server.
4. Copy the license file (license.xml) to the $TB_HOME/license directory.
5. Execute gen_tip.sh.
6. Configure the network file (tbdsn.tbr). The tbdsn.tbr file configures the server connection path and is located in $TB_HOME/client/config. Configure tbdsn.tbr manually as shown in the following guide.
7. Access the Tibero server with the Tibero client.
```shell
$ useradd -d /home/tibero7_client -g dba tibero7_client
$ passwd tibero7_client
```

```shell
$ ls
tibero7-bin-FS02-windows64_2008-269353-20240314021215.zip
$ unzip tibero7-bin-FS02-linux64_3.10-254994.tar.gz
```

```shell
$ vi ~/.bash_profile
export TB_HOME=/home/tibero7_client/tibero7
export TB_SID=tibero
export PATH=$TB_HOME/bin:$TB_HOME/client/bin:$PATH
export LD_LIBRARY_PATH=$TB_HOME/client/lib:$LD_LIBRARY_PATH
$ source ~/.bash_profile
```

```shell
$ cd $TB_HOME/config
$ sh gen_tip.sh
Using TB_SID "tibero"
/home/tibero7_client/tibero7/config/tibero.tip generated
/home/tibero7_client/tibero7/config/psm_commands generated
/home/tibero7_client/tibero7/client/config/tbdsn.tbr generated.
Running client/config/gen_esql_cfg.sh
Done.
```

```shell
$ cd $TB_HOME/client/config
$ ls
gen_esql_cfg.sh  tbco_file_id  tbdsn.tbr  tbimport_hl.config.default  tbpcb.cfg
tbc_file_id  tbdfdump.cfg  tbertl.cfg  tbpc.cfg
$ vi tbdsn.tbr
#------------------------------------------------------------
# /home/tibero7_client/tibero7/client/config/tbdsn.tbr
# Network Configuration File.
# Generated by gen_tip.sh at 2023. 06. 12. (Mon) 14:30:16 KST
tibero=(
    (INSTANCE=(HOST=192.168.xxx.xxx)
        (PORT=8629)
        (DB_NAME=tibero)
    )
)
```

```shell
$ tbsql tibero/tmax

TmaxTibero Corporation Copyright (c) 2020-. All rights reserved.

Connected to Tibero.

SQL> SELECT
  2  IPADDR, USERNAME
  3  FROM
  4  V$SESSION;

IPADDR
----------------------------------------------------------
USERNAME
-----------------------------------------------------------------------------------
192.168.xxx.xxx
TIBERO
```

Tibero is a data management solution that manages large amounts of data and guarantees reliable business continuity. It has all of the features needed for an RDBMS environment, such as distributed database links, data replication, data clustering, and parallel query processing. Tibero thereby provides an optimal database environment that meets enterprise needs.
Tibero software distribution policy is as follows:
Full Purchase Version: Licensed by the number of CPUs and features.
Evaluation Version: A license that restricts the trial period and the number of users.
This section describes supported platforms, operating systems, hardware and software requirements.
Tibero supports the following platforms and operating systems.

| Vendor | Processor | Operating System | Bits |
|---|---|---|---|
| SUN | SPARC | Solaris 11 | 64 |
| IBM | PPC | AIX 7.1, AIX 7.2, AIX 7.3 | 64 |
| GNU | X86 | Red Hat Enterprise Linux 7, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 8.10, 9.3, 9.4 | 64 |
Hardware requirements for installing Tibero are as follows:
LINUX/x86: 1 GB, 2 GB, 500 MB, 2.5 GB / 400 MB
Solaris: 1 GB, 2 GB, 500 MB, 2.5 GB / 400 MB

For Solaris platforms, refer to the following when setting swap space.
- Recommended setting: twice the physical memory
- User setting: (background process count + worker process count) * swap usage per process (300 MB)
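As a hedged illustration, the user-setting formula can be evaluated with a quick shell calculation (the process counts below are made-up example values, not Tibero defaults):

```shell
# Swap sizing per the user-setting formula above:
#   (background process count + worker process count) * 300 MB per process
BG_PROCS=10          # example background process count (illustrative)
WORKER_PROCS=40      # example worker process count (illustrative)
SWAP_PER_PROC_MB=300 # swap usage per process, per this guide

SWAP_MB=$(( (BG_PROCS + WORKER_PROCS) * SWAP_PER_PROC_MB ))
echo "Required swap: ${SWAP_MB} MB"
```

With these example counts the formula yields 15,000 MB; compare the result against the recommended setting of twice the physical memory.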
Software requirements for installing Tibero are as follows:

- LINUX/x86
  - Red Hat Enterprise Linux 7 kernel 3.10.0 or later
  - C99 compatible compiler, gcc version 4.8.5 or later
  - JDK 1.5.17 or later
- Solaris
  - Solaris 11 64-bit kernel
  - C99 compatible compiler, Sun C 5.8 2005/10/13
  - JDK 1.5.17 or later
- AIX
  - AIX 7.1, 7.2, or 7.3 64-bit kernel
  - C99 compatible compiler
To install using the Tibero Client Installer, a version of JDK 1.5.17 or higher must be installed.
[Figure 1] Check JDK installation
Go to Tmax TechNet, create an account, and download the installation file.
UNIX, Linux: Download the tar.gz file from TechNet.
After installing Tibero, a JDBC driver is provided to integrate with other solutions.
When integrating via JDBC, the required driver file can be found at the following path on the Tibero DB server (file path for Unix environments):
UNIX: $TB_HOME/client/lib/jar
The following is the driver file name for each version of Tibero.
Tibero 7: tibero7-jdbc.jar
This guide is intended for database administrators (hereafter DBA) who want to use the Hadoop Connector in Tibero.
- Databases: RDBMS
- Operating systems and system environments: UNIX and Linux
The guide contains 2 chapters.
Chapter 1 describes how to use the Tibero Hadoop Connector.
Chapter 2 describes how to use the Tibero HDFS Connector.
This chapter describes how to install and uninstall multiple instances.
This chapter explains the external procedures provided in Tibero.
An external procedure is a function that users write in an external language supported by Tibero, such as C or Java. The procedure is treated as a PSM function (or procedure) in Tibero. Users can invoke external procedures to use any user-defined library and obtain the processed result.
The external procedures in Tibero are used:
To issue an event in the DBMS using email, SMS, and network.
To compress a file depending on the size of the entire disk or the file, or to move it to an external disk.
```shell
$ oslevel -r
7100-04
```
To use data encryption in applications.
The C language or the Java language is used to create external procedures in Tibero.
Executing C External Procedures
Creating C External Procedures
Example
Executing JAVA External Procedures
Creating JAVA External Procedures
Example
Oracle Linux 8.7, 8.8, 8.9, 8.10, 9.3, 9.4
Rocky Linux 8.6, 8.8, 8.9, 8.10, 9.3, 9.4
ProLinux 7.5, 8.5
CentOS 7 (64-bit)
AIX: 1 GB, 2 GB, 500 MB, 2.5 GB / 400 MB
JDK 1.5.17 or later
This chapter describes the concepts and features of the Tibero Hadoop Connector.
Hadoop is an open-source framework solution from Apache Software Foundation to facilitate storage, distribution, and parallel processing of large amounts of data.
Hadoop includes the following software stacks.
HDFS (Hadoop Distributed File System): A distributed file system that provides failure recovery and high availability through data block redundancy.
MapReduce: A distributed programming framework that automatically performs distributed parallel processing on tasks divided into Map and Reduce phases. It supports parallel and distributed processing using resources from multiple nodes, and provides failure recovery.
In summary, Hadoop is a system aimed at mass storage and fast processing of data.
The number of businesses using Hadoop is increasing rapidly as the amount of data grows exponentially. However, Hadoop requires a MapReduce program for data processing, which imposes a heavy programming burden when creating the various queries needed for data analysis. It also lacks an interactive SQL interface with immediate feedback, so users must write code to achieve the desired results.
There are also many cases that require multiple types of data sources to store various data formats.
In such cases, unstructured data is stored in HDFS, and structured data in the existing RDBMS. Combining heterogeneous data sources to analyze legacy database and big data together dramatically increases data processing complexity.
Tibero Hadoop Connector is a solution that satisfies big data processing requirements, and the need for heterogeneous data source integration and convenient interface.
The following Tibero Hadoop Connector features are provided to supplement Hadoop.
Provides an External Table interface to process data stored in HDFS together with data in RDBMS tables.
The External Table interface reduces the inconvenience of data migration.
Supports all query functions of Tibero.
Supports data integration functions such as table joins between HBase and Tibero tables.
Data in Hadoop can be combined with data in Tibero in a single query using ANSI SQL. The access interface between Tibero and Hadoop HDFS is unified in SQL, which reduces the burden of using heterogeneous data sources. Using SQL to perform various queries according to fast-changing data analysis needs enables a fast data analysis process.
Tibero Hadoop Connector uses the External Table function to access data so that queries can be performed on various data formats as with structured data. Various functions including query processing functions provided by Tibero InfiniData can also be used with data in Hadoop.
In summary, Tibero Hadoop Connector enables easy integrated analysis of data in Hadoop and RDBMS. Such agile big data analysis functionality can help to quickly respond to the rapidly changing business environment.
Tibero HDFS Connector only supports Linux. HDFS Connector supports Hadoop 1.2.X versions.
The manual installation process of Tibero in a Unix environment is basically the same as in the single instance installation. The steps are as follows:
Install a Tibero instance by referring to “Unix” on the installation page. Set the environment variables as follows:

| Environment Variable | Value |
|---|---|
| TB_HOME | /home/tibero/Tibero/tibero7 |
| TB_SID | tibero1 |
| LD_LIBRARY_PATH | $TB_HOME/lib:$TB_HOME/client/lib |
| PATH | $PATH:$TB_HOME/bin:$TB_HOME/client/bin |
Set another TB_SID variable for a second instance. It must be set to a value that is different from the first variable.
Enter the following command from the $TB_HOME/config directory.
This command creates a configuration file (.tip) as well as tbdsn.tbr and psm_commands files.
Open the configuration file created in the $TB_HOME/config directory, and set the LISTENER_PORT value differently from the first instance's setting.
Warning
The initialization parameters _LSNR_SPECIAL_PORT, _LSNR_SSL_PORT and CM_PORT are respectively set to LISTENER_PORT+1, LISTENER_PORT+2, LISTENER_PORT+3 by default.
Therefore, when setting LISTENER_PORT, the values of the initialization parameters LISTENER_PORT, _LSNR_SPECIAL_PORT, _LSNR_SSL_PORT and CM_PORT for the new instance must be set differently from the previously installed instance.
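To make the warning concrete, the sketch below derives the four ports for each instance (the listener ports 8629 and 9629 match the examples elsewhere in this document; the +1/+2/+3 offsets are as stated above):

```shell
# Derived ports per the defaults stated above:
#   _LSNR_SPECIAL_PORT = LISTENER_PORT + 1
#   _LSNR_SSL_PORT     = LISTENER_PORT + 2
#   CM_PORT            = LISTENER_PORT + 3
FIRST_LSNR=8629    # first instance's LISTENER_PORT (example value)
SECOND_LSNR=9629   # second instance's LISTENER_PORT (example value)

for base in $FIRST_LSNR $SECOND_LSNR; do
  echo "LISTENER_PORT=$base special=$((base + 1)) ssl=$((base + 2)) cm=$((base + 3))"
done

# The second instance's listener port must lie above the first instance's
# CM_PORT so that none of the four ports collide.
if [ "$SECOND_LSNR" -gt $((FIRST_LSNR + 3)) ]; then
  echo "no port overlap"
fi
```

With these values the first instance occupies 8629 through 8632 and the second 9629 through 9632, so the ranges do not overlap.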
The subsequent steps are the same as in “Unix” in installation page from step 5.
This section describes the uninstallation process of existing multi-instances.
The manual uninstallation process of Tibero multi-instances for Unix is basically the same as the single instance uninstallation. The examples below assume the following environment variables:

| Environment Variable | Value |
|---|---|
| TB_HOME | /home/tibero/Tibero/tibero7 |
| TB_SID | tibero1, tibero2 |
| LD_LIBRARY_PATH | $TB_HOME/lib:$TB_HOME/client/lib |
| PATH | $PATH:$TB_HOME/bin:$TB_HOME/client/bin |
Set the TB_SID environment variable of the first Tibero instance to delete, and then shut down the instance.
Set the TB_SID environment variable of another instance to delete, and then shut down the instance.
Delete the Tibero installation directory as well as all the subdirectories by using the OS's command.
The method for mounting a CD-ROM on Linux is as follows:
1. Check whether automounting is working.
2. Switch to the root user using the root account password.
3. Run the mount command.
4. Run the umount command.
The method for mounting a CD-ROM in Solaris is as follows:
1. Check the volume manager to verify whether a CD is recognized automatically.
2. If there is an existing CD, eject it.
3. Insert a CD and mount it with the following commands.
```shell
export TB_SID=tibero2
```

```shell
tibero@Tibero:~/Tibero/tibero7/config$ gen_tip.sh
Using TB_SID "tibero2"
/home/tibero/Tibero/tibero7/config/tibero2.tip generated
/home/tibero/Tibero/tibero7/config/psm_commands generated
/home/tibero/Tibero/tibero7/client/config/tbdsn.tbr generated.
Running client/config/gen_esql_cfg.sh
Done.
```

```shell
tibero@Tibero:~/Tibero/tibero7/config$ cat tibero2.tip
......Omitted......
DB_NAME=tibero2
LISTENER_PORT=9629
......Omitted......
```

```shell
export TB_SID=tibero1
tbdown
export TB_SID=tibero2
tbdown
rm -rf $TB_HOME
```

```shell
# mkdir /cdrom
# mount <option> <device_name> <mount_point>
```

```shell
$ ps -aux | grep automount
$ su -
Password : *****
# mount <option> <device_name> <mount_point>
# umount <mount_point>
```

```shell
$ ps -e | grep vold
$ eject
# mkdir <mount_point>
# mount <options> <device_name> <mount_point>
```

This document is intended for database administrators (DBA) and application developers who want to use the In-Memory Column Store (IMCS) function provided in Tibero.
Database
RDBMS
SQL
This guide consists of 4 chapters.
Chapter 1 briefly introduces the concept and features of In-Memory Column Store.
Chapter 2 describes how to enable or disable In-Memory Column Store and how to set objects for In-Memory population.
Chapter 3 describes high availability features supported with In-Memory Column Store.
Chapter 4 describes initialization parameters and dynamic performance views relevant to In-Memory Column Store.
This chapter describes how to use Tibero Recovery Manager (RMGR) to backup and recover Tibero database files stored in a TAS disk space.
The following is an example of backing up all data of a TAS disk space to the "/backupdir" directory by using the tbrmgr command.
The following is an example of recovering data of a TAS disk space with the backup copy in the "/backupdir" directory by using the tbrmgr command.
This chapter describes how to use the Tibero HDFS Connector.
The syntax for creating an external table using the HDFS Connector is the same as the general syntax for creating an external table.
In the previous syntax, Port is optional and 8020 is its default value.
The following is an example of defining an external table using the HDFS Connector.
The previous example creates three HDFS files for the external table. The host name of the HDFS Namenode is 'hadoop_name', and the port number defaults to 8020. Next, it reads the three files, 'f0.txt', 'f1.txt', and 'f2.txt', from the '/user/tibero' directory below HDFS.
This chapter describes high availability features supported with IMCS.
You can use DISTRIBUTE to specify how data of an In-Memory object is distributed to each TAC node. The following describes each DISTRIBUTE option.
| Option | Description |
|---|---|
| AUTO | Distributes data equally to each node in accordance with Tibero internal rules. (Default) |
| By Rowid Range | Distributes data to each node by rowid range. Currently not supported in Tibero. |
| By Partition | Distributes data to each node by partition. Currently not supported in Tibero. |
| By Subpartition | Distributes data to each node by subpartition. Currently not supported in Tibero. |
You can use DUPLICATE for an In-Memory object to specify whether and how each TAC node stores the copy of data.
| Option | Description |
|---|---|
| NO DUPLICATE | No data copy of In-Memory data. (Default) |
| DUPLICATE | One copy for each In-Memory object's data. When populating an In-Memory object specified with the DUPLICATE option, the same data is stored in 2 TAC nodes. Currently not supported in Tibero. |
| DUPLICATE ALL | As many copies of each In-Memory object's data as the number of nodes. Every instance stores the copies of an In-Memory object in its own IMCS area. |
Since In-Memory data populated in a TAC environment is distributed across all TAC nodes by default, queries for In-Memory data are executed in parallel. For example, when you perform an In-Memory scan for a table populated in a 3-node TAC environment, the database reads data from every corresponding table stored in each TAC. This means that In-Memory scans in a TAC environment are performed in parallel, with the degree of parallelism (DOP) determined based on the number of nodes.
To query with HDFS Connector, create an external table and execute queries against it.
The External Table interface enables the use of all query functions provided by Tibero, including join operations with Tibero tables, various aggregate functions, and UDFs.
The following is an example of using the HDFS Connector to execute a query.
The /*+ parallel */ hint can also be used for parallel execution as with general tables.
Parallel execution can enhance performance since it uses parallel table scans by dividing HDFS files into HDFS block units. However, DML cannot be executed against external tables of HDFS Connector.
```shell
tbrmgr backup -o /backupdir
```

```shell
tbrmgr recover -o /backupdir
```

```
hdfs://[HDFS Namenode IP or hostname][:Port]/[File Path]
```

```sql
CREATE DIRECTORY TMP_DIR AS '/home/tibero/';

CREATE TABLE HDFS_EXT
( ID NUMBER(10),
  TEXT VARCHAR(64)
) ORGANIZATION EXTERNAL
( DEFAULT DIRECTORY TMP_DIR
  ACCESS PARAMETERS
  (
    LOAD DATA INTO TABLE HDFS_EXT
    FIELDS TERMINATED BY '|'
    (ID, TEXT)
  ) LOCATION (
    'hdfs://hadoop_name/user/tibero/f0.txt',
    'hdfs://hadoop_name/user/tibero/f1.txt',
    'hdfs://hadoop_name/user/tibero/f2.txt'
  )
);
```

```sql
SELECT COUNT(*), AVG(PEOPLE.AGE)
FROM HDFS_EXT, PEOPLE
WHERE HDFS_EXT.ID = PEOPLE.ID
GROUP BY HDFS_EXT.TEXT;
```

```sql
SELECT /*+ parallel (8) */ COUNT(*), AVG(PEOPLE.AGE)
FROM HDFS_EXT, PEOPLE
WHERE HDFS_EXT.ID = PEOPLE.ID
GROUP BY HDFS_EXT.TEXT;
```
Database
RDBMS
SQL
Platform:
- HP-UX 11i v3 (11.31)
- Solaris (Solaris 11)
- AIX (AIX 7.1/AIX 7.2/AIX 7.3)
- GNU (X86, 64, IA64)
- Red Hat Enterprise Linux 7 kernel 3.10.0 or later
- Windows (x86) 64-bit
The guide contains 5 chapters.
Chapter 1 describes basic concepts of Tibero Spatial.
Chapter 2 describes schema objects related to Tibero Spatial.
Chapter 3 describes spatial indexes provided by Tibero Spatial.
Chapter 4 describes functions provided by Tibero Spatial.
Chapter 5 describes utilities provided by Tibero Spatial.
A shared library that is dynamically loadable in C language can be called via C external procedures.
User-defined libraries (hereafter user shared libraries) are dynamically loaded, not by a Tibero server process, but by a tbEPA (Tibero C External Procedure Agent) process.
Because user shared libraries are run in the memory of the tbEPA process, any error generated during execution does not affect server processes.
User shared libraries are treated as a PSM function in the server, thus all the privileges applicable to PSM can be granted to the libraries.
When a C external procedure is called while Tibero server is handling a client request, the tbEPA process waits until the request is completed.
The C external procedure is included in the current transaction, thus it is possible to commit or rollback the transaction depending on the result.
C external procedures are included in the current transaction, thus users can perform query or DML operations on the current transaction using the functions of the shared library.
User shared libraries registered via a C external procedure are dynamically loaded upon request for execution. Such actions are taken not by a server process, but by a tbEPA process.
A tbEPA process is automatically created by a server process when a C external procedure is called. The tbEPA process then receives from the server process the information necessary to call the shared library: the library location, the function to call, parameter information, and so on.
The tbEPA process dynamically loads the shared library and searches for the function. Based on user entered parameters, the tbEPA process executes the function. When the function has a return value or output parameter, the tbEPA process delivers the value to the server process.
These procedures are automatically processed in Tibero, thus users can write application programs in the same way they write PSM functions.
The following figure illustrates how a C external procedure is executed.
[Figure 1] Execution of a C External Procedure
The C external procedures are executed in the following way.
User specifies a PSM to be run, to execute a user shared library.
When a user-created application program calls the user shared library, the PSM module included in the Tibero server checks whether the library is located outside the database and then executes an external process. The tbEPA processes are dynamically loaded.
The server process delivers the library information (such as the library location, the function to be called, parameter information, and so on) to one of the tbEPA processes, and waits for the processing result.
The tbEPA process dynamically loads a library to call the function. Then it sends the result to the server.
The server analyzes the result received from the tbEPA process, and delivers it to the user-created application program.
The tbEPA process is automatically terminated when the session ends.
This chapter describes how to calculate the size of the shared memory.
The shared memory in Tibero is divided into fixed memory and shared pool memory as follows:
Fixed Memory: Its size is fixed based on parameter values set at database startup.
- Buffer cache (database buffer)
- Log buffer
- Others: space for global variables and for managing working threads and session information
Shared Pool Memory: It is dynamically allocated during database operation.
- DD cache
- PP cache (library cache)
- Others
Consider the following when calculating the size of the shared memory.
The size of the shared memory cannot be increased dynamically while the database is running.
The size of the shared memory must be determined properly based on the usage patterns of the buffer cache and shared pool.
- Buffer cache: Determine the size according to the buffer cache hit rate in TPR after executing major workloads. If the hit rate is lower than 90%, increase the buffer cache.
- Shared pool memory: Determine the size according to the SHARED POOL MEMORY item in v$sga after executing major workloads. If the shared pool usage is too high (insufficient free area), increase the shared pool memory. At least 1 MB of shared pool memory is required for each session.
The size of each area in the shared memory can be checked with v$sga.
The size of the shared memory can be set and queried with the following initialization parameters.
The following example queries the size with the TOTAL_SHM_SIZE parameter.
The following example queries the size with the DB_CACHE_SIZE parameter.
Single mode : ⅔ of TOTAL_SHM_SIZE
TAC mode : ½ of TOTAL_SHM_SIZE
The following example queries the size with the LOG_BUFFER parameter. (the default value of LOG_BUFFER is 10 MB.)
The size can be calculated by subtracting the fixed memory size from the total shared memory size as follows:
- Single mode: The total shared pool size must be greater than _MIN_SHARED_POOL_SIZE.
- TAC mode: Sufficient space for Cluster Cache Control (CCC) and Cluster Wait-lock Service (CWS) is required. About 25% of the total buffer cache size is used by shared pools. The shared pool space excluding the space for CCC and CWS must be greater than _MIN_SHARED_POOL_SIZE. The free shared pool size for PP and DD caches must be greater than the CCC and CWS spaces.
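A rough single-mode sizing sketch combining the rules above (the TOTAL_SHM_SIZE and session values are illustrative; the 2/3 buffer-cache default and the 10 MB LOG_BUFFER default are taken from this chapter, and other fixed memory is ignored for simplicity):

```shell
# Shared pool (single mode, approx.) =
#   TOTAL_SHM_SIZE - buffer cache - log buffer
# All values in MB.
TOTAL_SHM_MB=3072                        # illustrative TOTAL_SHM_SIZE
DB_CACHE_MB=$(( TOTAL_SHM_MB * 2 / 3 ))  # single-mode default: 2/3 of TOTAL_SHM_SIZE
LOG_BUFFER_MB=10                         # LOG_BUFFER default per this chapter
SESSIONS=100                             # illustrative session count

SHARED_POOL_MB=$(( TOTAL_SHM_MB - DB_CACHE_MB - LOG_BUFFER_MB ))
echo "buffer cache: ${DB_CACHE_MB} MB, shared pool (approx.): ${SHARED_POOL_MB} MB"

# At least 1 MB of shared pool memory per session:
if [ "$SHARED_POOL_MB" -ge "$SESSIONS" ]; then
  echo "covers ${SESSIONS} sessions"
fi
```

With a 3,072 MB shared memory, this leaves roughly 1,014 MB of shared pool, comfortably above the 1 MB-per-session minimum for 100 sessions; verify the real figures against v$sga after running major workloads.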
This chapter describes the tbdv utility and its usage.
tbdv is a utility that checks integrity of data files in a Tibero database. It can perform basic integrity checks on data files while the database is offline.
tbdv checks the following for each data block.
- Is the block marked with the correct DBA?
- Does the block checksum match?
- Do the used and available spaces of the block add up to the total block size?
- Do the row pieces of the block not overlap with other row pieces?
If an integrity check discovers an error, it is assumed to be a media failure and a media recovery must be performed on the database.
tbdv is automatically installed and uninstalled along with Tibero.
The following shows how to execute tbdv at a command prompt.
To set the block size, use the -s option (default value: 8,192 bytes).
To check only a specified length of a file from the beginning, use the CHECK_LENGTH option. If this option is not specified, the entire file is checked. To use a raw device, explicitly set the device size; otherwise, an error occurs.
The following executes tbdv.
This section describes how tbdv checks integrity of data files.
If a block is marked with incorrect DBA, tbdv displays the following.
If a fractured block is found in a data file, tbdv displays the following.
If the available and used spaces of a data block do not sum to the total block size, tbdv displays the following.
tbdv treats a zeroed out block as unformatted. When a block with data is zeroed out due to a disk error, it is treated as unformatted (unallocated space in the data file) and no error is generated.
This chapter describes the concept and features of In-Memory Column Store.
In-Memory Column Store (IMCS) is a memory storage architecture optimized for column scans, which stores and manages data copies in a columnar format.
The columnar storage format is supported in addition to the traditional row-oriented storage, to improve database performance in a mixed environment handling both OLTP and OLAP workloads.
[Figure 1] Row-based storage vs Column-based storage
IMCS resides in the In-Memory Area, a component of the shared memory. The size of the In-Memory Area is allocated at startup of the database and remains fixed until the next startup.
Unlike the buffer cache, which discards unused data when full, IMCS keeps data permanently in the In-Memory Area unless the user removes it with a command. This eliminates additional I/O costs when reading data stored in IMCS. (However, if no valid data is found through an in-memory scan, I/O costs may be incurred because the buffer cache must additionally be read.)
Columnar Format
IMCS stores copies of data in a columnar format optimized for scans. Data stored in a columnar format has an internally fixed size, which enables efficient filter operations (e.g. <, >, =) during a scan.
IMCS eliminates duplicate data by compressing each column's data. This improves memory efficiency, and improves scan performance by reducing the amount of data to be scanned.
IMCS manages each column's data in a storage unit called an In-Memory Compression Unit (IMCU). An IMCU contains information about the minimum and maximum values of each column's data. Based on these minimum and maximum values, the database performs IMCU pruning to eliminate unnecessary scans and thus improve scan performance.
The In-Memory Area is allocated in a fixed location within the shared memory at startup of the database. The size of the In-Memory Area is set by the INMEMORY_SIZE initialization parameter.
The In-Memory Area is divided into two subpools: In-Memory Compression Unit (IMCU) for columnar data, and Snapshot Metadata Unit (SMU) for metadata of columnar data.
[Figure 2] In-Memory Column Store Architecture
An In-Memory Compression Unit (IMCU) is a storage unit of IMCS that contains data for one or more columns in a compressed format. The size of an IMCU is 1 MB, and IMCS stores data for a single object (ex. table, partition or subpartition) to multiple IMCUs in columnar format.
An IMCU stores columnar data for only one object, and every IMCU contains one or more Column Compression Units (CU).
Column Compression Unit (CU): A Column Compression Unit (CU) is storage for a single column in an IMCU. A CU encodes the column value of each row as a 2-byte value called a dictionary code. A CU contains a mapping table from each dictionary code to the actual column data.
A Snapshot Metadata Unit (SMU) stores metadata for an associated IMCU. In IMCS, each IMCU is mapped to a separate SMU. Every SMU contains information about the object and the address of a data block populated in each associated IMCU, as well as a transaction journal, which records rows modified with DML statements.
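The dictionary-code idea in the CU description can be mimicked with a toy shell example (this only illustrates deduplication; it is not Tibero's actual storage format):

```shell
# A column with repeated values: a CU stores one dictionary entry per
# distinct value plus a short 2-byte code per row, instead of the full
# value for every row.
COLUMN_FILE=$(mktemp)
printf '%s\n' SEOUL BUSAN SEOUL SEOUL DAEGU BUSAN SEOUL > "$COLUMN_FILE"

ROWS=$(wc -l < "$COLUMN_FILE")            # values stored row-by-row
DISTINCT=$(sort -u "$COLUMN_FILE" | wc -l) # dictionary entries after dedup
echo "rows=${ROWS} dictionary entries=${DISTINCT}"

rm -f "$COLUMN_FILE"
```

Here 7 row values collapse to a 3-entry dictionary plus 7 small codes, which is the kind of saving that lets a scan touch less data.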
Population, which converts data into columnar format in IMCS, is performed by background processes. At startup of the database, agent processes initiate population of objects having In-Memory priority. When initially accessing objects with no priority, agent processes populate them by using queries.
Reading columnar data for objects populated in IMCS through queries, or recording transaction journals in SMUs using DML statements are performed by worker processes.
SIMD (Single Instruction, Multiple Data) vector processing enhances performance of expressions in a WHERE clause, by scanning a set of column values in a single CPU instruction. For example, 8 values (2-byte data) from a CU are loaded into the CPU, to get 8 results in a single comparison instruction.
SIMD is used for comparing In-Memory columns to bind parameters or constant values. The following comparison operators are supported: >, >=, =, <, <=, AND
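The arithmetic behind the example above (8 two-byte values per instruction) implies a 128-bit vector register; that register width is an assumption on our part rather than something this document states:

```shell
# Values compared per SIMD instruction = register width / dictionary code width
REG_BITS=128   # assumed vector register width (inferred from the 8 x 2-byte example)
CODE_BITS=16   # 2-byte dictionary code
echo "values per instruction: $(( REG_BITS / CODE_BITS ))"
```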
This guide is intended for developers who want to develop applications by using Tibero Data Provider for .NET functions provided by Tibero.
Database
RDBMS
.NET framework
The guide contains 3 chapters.
Introduces Tibero Data Provider for .NET (TDP.NET).
🔎
Describes how to develop an application by using TDP.NET with examples.
Describes API provided by TDP.NET.
🔎
This chapter describes basic concepts of Tibero Spatial.
Tibero Spatial provides functions to save and use spatial data in Geographic Information Systems (GIS) and circuit diagrams.
Tibero Spatial supports seven types of GEOMETRY objects that comply with the SQL-MM standard.
The following table describes each type of GEOMETRY object. Objects are saved as binary in Well Known Binary (WKB) format. Coordinates of GEOMETRY objects in the WKB format are saved as floating point numbers; therefore, calculations on them may contain small errors, and results may differ depending on the floating point calculation method.
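The floating-point effect mentioned above is easy to reproduce outside Tibero: coordinate values such as 0.1 have no exact binary representation, so arithmetic on them accumulates tiny representation errors. A minimal, database-independent demonstration:

```java
public class FloatingPointDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 cannot be represented exactly in IEEE 754 doubles,
        // so their sum differs from 0.3 by one unit in the last place.
        double sum = 0.1 + 0.2;
        System.out.println(sum == 0.3);                 // exact comparison fails
        System.out.println(sum);                        // shows the accumulated error
        System.out.println(Math.abs(sum - 0.3) < 1e-9); // tolerance-based comparison succeeds
    }
}
```

This is why comparisons of computed coordinates are best done with a tolerance rather than exact equality.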
Tibero Spatial supports the following coordinate systems, which map stored coordinates to positions in real space. The result of a Spatial function may vary depending on which coordinate system a GEOMETRY belongs to.
A coordinate system in which a position is expressed on a two-dimensional plane or in a three-dimensional space. It is useful for depicting a local space where the curvature of the earth is negligible. A GEOMETRY without a specified SRID is calculated in this Cartesian coordinate system by default.
A coordinate system in which a position is expressed on a two-dimensional spherical or ellipsoidal surface (spheroid), with locations given as latitude and longitude. It is useful for accurately depicting global space. In general, a sphere or spheroid is used to express an actual position on the earth. The radius (or major and minor radii) of the sphere or spheroid used in this system depends on the SRID given to the GEOMETRY; you cannot store a different spheroid for each individual GEOMETRY.
Spatial provides SRID, which is the identifier type of the coordinate system. SRID has information about a specific coordinate system, including whether the coordinate system for GEOMETRY is the Cartesian coordinate system or the spherical coordinate system.
This guide is intended for database administrators (DBA) who intend to use Tibero. This guide describes the program structure and usage of tbCLI.
Database
RDBMS
SQL
The guide contains 5 chapters.
Describes the components and program structure of tbCLI.
🔎
Describes the data types used by tbCLI programs.
🔎
Describes the functions supported by tbCLI.
🔎
Describes the errors that can occur in tbCLI.
🔎
This guide is intended for developers who develop TEXT applications and administrators who maintain TEXT systems using Tibero® (hereafter Tibero).
Database
RDBMS
SQL
This guide consists of 4 chapters.
Describes how to create and manage TEXT indexes.
🔎
Describes queries that use the Tibero TEXT function.
🔎
Describes elements related to Tibero TEXT indexes.
🔎
Gives the default list of stopwords used in Tibero TEXT.
🔎
This chapter describes the failover and load balancing functions provided by tbJDBC.
tbJDBC can automatically recover certain resources when an attempt to connect in a typical application or TAC configuration fails, or when a connection is lost during job processing. This capability is referred to as failover.
To use the failover capability, specify the URL of DriverManager.getConnection() in a description format and configure FAILOVER.
Set FAILOVER to one of the following values.
The following example enables failover. When an attempt to connect to svr1 fails, an automatic reattempt occurs to connect to svr2. If all attempts to connect to every server fail, an error occurs.
The following property specifies the maximum number of reattempts to reestablish a connection when the failover capability is enabled.
failover_retry_count
The following properties prevent a client from waiting for a connection infinitely when the client cannot detect a disconnection because of an error.
login_timeout
read_timeout
self_keepalive
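As a sketch of how these settings could be supplied (the property keys are those listed above; the values and the exact units or accepted forms are assumptions to verify against the tbJDBC manual), the timeouts can be collected in a java.util.Properties object before calling DriverManager.getConnection():

```java
import java.util.Properties;

public class ConnectionPropertiesSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("user", "tibero");
        props.setProperty("password", "tmax");

        // Keys from the text; values here are illustrative.
        props.setProperty("failover_retry_count", "3");  // max reconnect attempts on failover
        props.setProperty("login_timeout", "10");        // give up on a hung connect
        props.setProperty("read_timeout", "30");         // give up on a hung read
        props.setProperty("self_keepalive", "on");       // detect silently dropped links

        // DriverManager.getConnection(url, props) would consume these;
        // the actual connect is omitted since it requires a running server.
        System.out.println(props.getProperty("failover_retry_count"));
    }
}
```

Setting these prevents a client from blocking indefinitely on a connection whose failure it cannot otherwise detect.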
tbJDBC also provides the ability to distribute users across multiple nodes in a typical application or TAC configuration, to increase the efficiency of database servers. This ability is referred to as load balancing. To use the load balancing capability, specify the URL of DriverManager.getConnection() in a description format and add (load_balance=on).
The following example enables load balancing to distribute users to svr1 and svr2 when initiating connection to servers.
This chapter describes the SSL function provided by tbJDBC and its usage.
Data stored in Tibero may contain sensitive information such as a personal identification number (a resident registration number) or a bank account number. This information can potentially be compromised by an unauthorized user, so a measure to authenticate servers and clients is required to prevent unauthorized access. To this end, tbJDBC provides Secure Sockets Layer (SSL).
The SSL function of tbJDBC has been implemented using the Java Secure Socket Extension (JSSE) package. Information between tbJDBC and the Tibero server can be protected with this function.
To use the SSL function, the Tibero server and tbJDBC must be set up. After the two are successfully set up, all information exchanged between the Tibero server and tbJDBC can be protected without any other additional processes.
Tibero servers must be set up according to the following steps.
Check whether the authentication files exist.
In the $TB_SID.tip file, add the following initialization parameters.
Restart the Tibero server.
tbJDBC must be set up as follows:
This chapter describes the options of system.sh (vbs).
The following are the options that can be used with system.sh(vbs).
The following describes each option.
This chapter describes how to calculate the size of the undo tablespace.
If data is updated frequently in a system, a large volume of I/O occurs in the undo tablespace. In such systems, distribute the I/O by creating the undo tablespace with multiple data files.
The undo tablespace is automatically managed, providing the following advantages.
Eliminates the need to manually design and manage rollback segments
Prevents undo data from being overwritten
Database
RDBMS
Java
The guide contains 11 chapters.
Tibero
Operating systems and system environments
Unix and Linux
This guide consists of 7 chapters.
POINT
Consists of one point. Zero-dimension.
LINESTRING
Consists of two or more points.
POLYGON
Consists of one external ring and zero or more internal rings.
A ring is a line string whose start and end points are the same and which has no self-intersection. Internal rings may intersect only at a single point, or not at all.
MULTIPOINT
Consists of one or more points. Zero-dimension.
MULTILINESTRING
Consists of one or more line strings.
MULTIPOLYGON
Consists of one or more polygons.
GEOMETRYCOLLECTION
Consists of one or more GEOMETRY objects.
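The POLYGON ring rules above can be partially checked with a small, illustrative sketch (the flat coordinate-array layout and the helper are hypothetical, not Tibero's internal representation); only the closure condition is verified here:

```java
public class RingClosureCheck {
    // A ring is given as coordinate pairs: {x0, y0, x1, y1, ...}.
    static boolean isClosed(double[] ring) {
        int n = ring.length;
        // A closed ring needs at least 4 points (8 values: 3 distinct
        // vertices plus the closing point), and the first point must
        // equal the last.
        return n >= 8 && ring[0] == ring[n - 2] && ring[1] == ring[n - 1];
    }

    public static void main(String[] args) {
        // External ring of a square POLYGON: (1 1, 2 1, 2 2, 1 2, 1 1)
        double[] closed = {1, 1, 2, 1, 2, 2, 1, 2, 1, 1};
        // Same vertices without the closing point: not a valid ring.
        double[] open   = {1, 1, 2, 1, 2, 2, 1, 2};
        System.out.println(isClosed(closed));
        System.out.println(isClosed(open));
    }
}
```

A full validator would also need the self-intersection test, which requires segment-intersection geometry and is omitted here.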
Hardware
At least 2.5GB hard disk space
More than 1 GB RAM
Compiler
PSM (C99 support needed)
tbESQL/C (C99 support needed)
Platform
Windows(x86) 64bit
Hardware
At least 2.5 GB hard disk space
More than 1 GB RAM
Compiler
PSM (C99 support needed)
tbESQL/C (C99 support needed)
At least 2.5GB hard disk space
More than 1 GB RAM
Compiler
PSM (C99 support needed)
tbESQL/C (C99 support needed)
Describes how tbCLI can interoperate with ODBC.
Platform
HP-UX 11i v3(11.31)
Solaris (Solaris 11)
AIX (AIX 7.1/AIX 7.2/AIX 7.3)
GNU (X86, 64, IA64)
Red Hat Enterprise Linux 7 kernel 3.10.0 or later
Windows(x86) 64bit
Hardware

NONE
Disable failover. (Default value)
SESSION
Reestablishes a connection by trying to reconnect to another node after a failed connection.
CURSOR
Recovers the java.sql.ResultSet in addition to SESSION-level recovery. Failover set to this value is available only when:
executing a SELECT statement
the ResultSet type is TYPE_FORWARD_ONLY or TYPE_SCROLL_INSENSITIVE
the ResultSet concurrency type is CONCUR_READ_ONLY
using only reusable parameters (volatile data such as java.io.Reader and java.io.InputStream data or string data with the size that is greater than the maximum VARCHAR size cannot be failed over.)
Creates a system user and grants permissions. For the initial installation, select 'Y'.
Y: create a system user and grant permissions.
N: retain data.
-a2
Creates a table for a profile that can specify the database user password management policy. For the initial installation, select 'Y'.
Y: delete the existing table and create a new one.
N: retain data.
For more information about the profile, refer to "User Management and Database Security" in Tibero Administrator's Guide.
-a3
Registers a job schedule for collecting database object statistics. For the initial installation, it is recommended to select 'Y'.
Y: delete the job for statistics collection and then register.
N: do not register.
-a4
Creates TPR related tables for regular, automatic collection of statistics. Select 'Y' for the initial installation.
Y: delete the existing table and then create a new one.
N: retain data.
For more information about the APM, refer to "Tibero Performance Repository" in Tibero Administrator's Guide.
pkgonly
Executes only PSM built-in package scripts.
-sod
Divides the administrator role into 3 accounts for security purposes.
SYSSEC: Security Administrator
SYSAUD: System Auditor
SYSADM: System Administrator
(Default value: N, Not supported in Windows)
error
Prints error details in the terminal when an error occurs from a specific script while executing system.sh. (Not supported in Windows)
-h
Displays the help screen.
-p1 password
Enters the password of the SYS account. (Default value: tibero)
-p2 password
Enters the password of the SYSCAT account. (Default value: syscat)
-a1
Provides scalability
Supports Flashback queries
Consider the following when calculating the size of an undo tablespace.
Data file size, UNDO_RETENTION, and undo segments
Individual spaces in an undo segment are automatically managed, therefore you only need to set the minimum/maximum number of undo segments and the maximum size of a data file.
Set the minimum number of undo segments (USGMT_ONLINE_MIN) to 10 (default value). However, if there are too many concurrent transactions, overhead occurs to create additional undo segments. To decrease the overhead, set USGMT_ONLINE_MIN to the number of concurrent transactions.
If USGMT_ONLINE_MIN is greater than the number of existing undo segments, additional undo segments are created at boot time. USGMT_ONLINE_MIN can be set by node in TAC. For example, USGMT_ONLINE_MIN can be set to 30 and 40 for NODE1 and NODE2 respectively.
Set the maximum number of undo segments (USGMT_ONLINE_MAX).
Tibero automatically creates additional undo segments to prevent multiple transactions from sharing a single undo segment. However, the number of undo segments cannot exceed USGMT_ONLINE_MAX.
In TAC, the total number of undo segments across all nodes cannot exceed USGMT_ONLINE_MAX. For example, if USGMT_ONLINE_MAX is set to 300 and the numbers of undo segments are 100 and 200 in NODE1 and NODE2 respectively, additional undo segments cannot be created on either node. Therefore, it is recommended to set USGMT_ONLINE_MAX to the maximum number of concurrent transactions.
The minimum size, expected size (based on the expected number of concurrent transactions), and actual size of an undo tablespace can be calculated as follows. The expected size is used when a database is installed for the first time; afterwards, the size can be checked based on the number of undo blocks per second under the heaviest TPR load.
The following are the formulas for calculating the size of an undo tablespace.
Expected size
Actual size
The following describes each item in the formulas.
The minimum number of undo segments for each undo tablespace
USGMT_ONLINE_MIN value for both single instance and TAC.
The maximum number of undo segments for each undo tablespace
Expected maximum number of concurrent transactions.
Single instance: USGMT_ONLINE_MAX value
TAC: USGMT_ONLINE_MAX / the number of nodes
_USGMT_UNIFORM_EXTSIZE
Size of an undo extent. An undo segment includes at least two undo extents.
Undo blocks per second
Number of undo blocks divided by a TPR interval (seconds). Use the value of undo blocks shown in Undo Statistic of TPR at the time under the heaviest load.
UNDO_RETENTION
Setting value in a tip file. (Default value: 900 seconds)
Margin
Reserve space for occasional long transactions (including INDEX rebuilds). (Unit: KB)
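Plugging sample numbers into the "actual size" formula above gives a feel for the result (all figures are illustrative; take the undo-blocks-per-second value from TPR on the target system, and size the margin for your own workload):

```java
public class UndoSizeEstimate {
    public static void main(String[] args) {
        long undoBlocksPerSecond = 500;          // from TPR Undo Statistic (example value)
        long undoRetentionSeconds = 900;         // UNDO_RETENTION default
        long blockSizeBytes = 8 * 1024;          // 8 KB block size
        long marginBytes = 200L * 1024 * 1024;   // example reserve for long transactions

        // Actual size = (Undo blocks per second * UNDO_RETENTION * Block size) + Margin
        long actual = undoBlocksPerSecond * undoRetentionSeconds * blockSizeBytes + marginBytes;
        System.out.println(actual + " bytes");
        System.out.println("~ " + actual / (1024 * 1024) + " MB");
    }
}
```

With these inputs the undo tablespace needs roughly 3.6 GB; a higher update rate or longer retention scales the requirement linearly.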
SQL> select * from v$sga;
NAME TOTAL USED
-------------------- --------- ---------
SHARED MEMORY 536870912 536870912
FIXED MEMORY 430875880 430875880
SHARED POOL MEMORY 105992872 40974968
SHARED POOL ALOCATORS 1 1
Database Buffers 357892096 357892096
Redo Buffers 10485760 10485760SQL> show param total_shm_size
NAME TYPE VALUE
---------------- -------- ---------
TOTAL_SHM_SIZE INT64 536870912
SQL> show param db_cache_size
NAME TYPE VALUE
---------------- -------- ---------
DB_CACHE_SIZE UINT64 357892096

SQL> show param log_buffer
NAME TYPE VALUE
---------------- -------- ---------
LOG_BUFFER UINT32 10485760

TOTAL_SHM_SIZE - [Fixed Memory]

The default value of _MIN_SHARED_POOL_SIZE: 1MB * MAX_SESSION_COUNT

[Total shared pool size] = _MIN_SHARED_POOL_SIZE + [CCC space] + [CWS space] + [Free shared pool size]

[Free shared pool size] > ([Total shared pool size] - _MIN_SHARED_POOL_SIZE) / 2

$ tbdv [-s BLKSIZE] [-l CHECK_LENGTH] /path/to/datafile

$ tbdv df1.dtf
==================================================================
= Database Verifier (DV) starts =
= =
= TmaxData Corporation Copyright (c) 2008-. All rights reserved. =
==================================================================
Verifying 'df1.dtf'...
Verifying complete.
Total blocks: 1152
Processed blocks: 1063
Empty blocks: 89
Corrupt blocks: 0

$ tbdv df1.dtf
==================================================================
= Database Verifier (DV) starts =
= =
= TmaxData Corporation Copyright (c) 2008-. All rights reserved. =
==================================================================
Verifying 'df1.dtf'...
block #2351 is misplaced.
dba differs (expected=16779567, real=16779551)
Verifying complete.
Total blocks: 2433
Processed blocks: 2343
Empty blocks: 90
Corrupt blocks: 1

$ tbdv df1.dtf
==================================================================
= Database Verifier (DV) starts =
= =
= TmaxData Corporation Copyright (c) 2008-. All rights reserved. =
==================================================================
Verifying 'df1.dtf'...
block #2311 isn't consistent.
Verifying complete.
Total blocks: 2433
Processed blocks: 2343
Empty blocks: 90
Corrupt blocks: 1

$ tbdv df1.dtf
==================================================================
= Database Verifier (DV) starts =
= =
= TmaxData Corporation Copyright (c) 2008-. All rights reserved. =
==================================================================
Verifying 'df1.dtf'...
block #2004 has incorrect freespace.
Verifying complete.
Total blocks: 2433
Processed blocks: 2343
Empty blocks: 90
Corrupt blocks: 1

Connection conn =
DriverManager.getConnection("jdbc:tibero:thin:@(description=(failover=session)" +
"(address_list="+
"(address=(host=svr1)(port=8629))"+
"(address=(host=svr2)(port=7629))"+
")(DATABASE_NAME=TACUAC))", "tibero", "tmax");

Connection conn =
DriverManager.getConnection("jdbc:tibero:thin:@(description=(load_balance=on)"
+
"(address_list="+
"(address=(host=svr1)(port=8629))"+
"(address=(host=svr2)(port=7629))"+
")(DATABASE_NAME=TACUAC))", "tibero", "tmax");

$TB_HOME/client/ssl/certs/certificate.pem
$TB_HOME/client/ssl/private/private.key

CERTIFICATE_FILE="$TB_HOME/client/ssl/certs/certificate.pem"
PRIVATE_KEY="$TB_HOME/client/ssl/private/private.key"

Connection conn =
DriverManager.getConnection("jdbc:tibero:thin:@(description=(protocol=tcps)" +
"(address=(host=svr1)(port=8629))"+
"(DATABASE_NAME=dbsvr))", "tibero", "tmax");

$ system.sh -h
Usage: system.sh [option] [arg]
-h : dispaly usage
-p1 password : sys password
-p2 password : syscat password
-a1 Y/N : create default system users & roles
-a2 Y/N : create system tables related to profile
-a3 Y/N : register dbms stat job to Job Scheduler
-a4 Y/N : create TPR tables
pkgonly : create psm built-in packages only
-sod Y/N : separation of duties
error : print error whenever sql encounters error

Minimum size:
(The minimum number of undo segments for each undo tablespace * _USGMT_UNIFORM_EXTSIZE * 2 * Block size (8KB)) + Margin

Expected size:
(The maximum number of undo segments for each undo tablespace * _USGMT_UNIFORM_EXTSIZE * 2 * Block size (8KB)) + Margin

Actual size:
(Undo blocks per second * UNDO_RETENTION * Block size (8KB)) + Margin

Database
RDBMS
SQL
Eclipse tools
Platform
HP-UX 11i v3(11.31)
Solaris (Solaris 11)
AIX (AIX 7.1/AIX 7.2/AIX 7.3)
GNU (X86, 64, IA64)
Red Hat Enterprise Linux 7 kernel 3.10.0 or later
Windows(x86) 64bit
This guide consists of 6 chapters.
Introduces tbSQL, which processes SQL commands interactively, and describes how to use it.
Introduces tbExport, which extracts part or all of database objects in Tibero, creates an extract file, and describes how to use it.
Introduces tbImport, which stores database objects extracted by tbExport in Tibero, and describes how to use it.
Introduces tbLoader, which loads a large volume of data to Tibero, and describes how to use it.
Introduces tbdv and describes how to use it.
🔎
Describes functions used to call Tibero utilities from applications.
🔎
This chapter describes how to solve problems that can occur after Tibero is installed.
If the environment variable TB_HOME is not set or is set improperly, or if the user configuration file is not applied after Tibero is installed, this message may be displayed when trying to connect with the tbSQL utility.
Check the TB_HOME variable in the user configuration file, change the value if necessary, and then apply the file to the system.
If the environment variable LD_LIBRARY_PATH is not set or is set improperly, or if the user configuration file is not applied after Tibero is installed, this message may be displayed when trying to connect with the tbSQL utility.
Check the LD_LIBRARY_PATH variable in the user configuration file, change the value if necessary, and then apply the file to the system.
If TB_SID is not set, this message may be displayed when the tbboot or tbdown command is executed.
Check the TB_SID variable in the user configuration file, change the value if necessary, and then apply the file to the system.
The following describes how to resolve issues that can occur after Tibero is installed in a TAC environment.
If a configuration file (.tip) cannot be found when a node is added, this message may be displayed.
Check whether the Tibero instance's $TB_SID.tip exists in the corresponding path by referring to TB_HOME and TB_SID.
To add a node, an existing node's setting values are necessary. If trying to access an existing node fails, this message may be displayed.
Check that TB_HOME and TB_SID are set properly.
Check that the existing node can be accessed through tbsql.
If an existing node does not work, start the node with the tbcm, tbboot commands then try to add the node again.
If transferring Tibero installation files to a new node fails, this message may be displayed.
Check whether the current node can access the node to be added using ping or ssh.
If the current node cannot access the node to be added, check the settings for account synchronization, networking, and any firewalls.
The SYSSUB tablespace, which contains TPR-related tables, was added starting from Tibero 5.0 r61295. During the system schema configuration portion of installing and patching, the syssub001.dtf file is automatically created in the directory $TB_HOME/database/$TB_SID. However, if the DB_CREATE_FILE_DEST parameter is set to a specific location, the file is created at that location.
However, in an environment using a TAC raw device, the file is created on the local disk of the node processing the system schema. This results in an error that prevents other nodes from accessing the SYSSUB tablespace. The same error occurs in a shared disk environment when the DB_CREATE_FILE_DEST parameter is not set correctly.
Recreate the SYSSUB tablespace as follows:
Delete the existing tablespace.
Create a new tablespace.
Execute the following script.
This error occurs when the specified port number is already in use when executing the tbboot command.
The dbtimezone is the reference time zone used when storing TIMESTAMP WITH LOCAL TIME ZONE data in the database. Since this value is processed internally by the database, modifying it is not recommended: doing so requires unnecessary operations that lead to performance degradation.
This chapter describes Spatial indexes provided by Tibero Spatial. The indexes are implemented in RTREE format and used to improve the performance of spatial queries. There are indexes for the planar and spheroidal coordinate systems.
The following describes how to create a Spatial index.
Spatial indexes have the following constraints.
A single index cannot be used for multiple columns.
A Spatial index cannot be partitioned.
A Spatial index cannot be created for non-GEOMETRY type columns.
A Spatial index can be used only when the following functions are implemented in a WHERE clause of SQL statements. - ST_CONTAINS - ST_COVEREDBY - ST_COVERS - ST_CROSSES - ST_DWITHIN - ST_EQUALS - ST_INTERSECTS - ST_OVERLAPS - ST_TOUCHES - ST_WITHIN
A Spatial index of the spheroid coordinate system can be created by adding a constraint to the GEOMETRY column of the table.
In some cases, it is necessary to change the coordinate system of the existing GEOMETRY column and create a Spatial index. If there are existing constraints, delete them, change the SRID of GEOMETRY, and then add appropriate constraints for the SRID to create the Spatial index.
The following describes how to drop a Spatial index.
The method is the same as for other indexes.
This guide is intended for database administrators (DBAs) and application developers who use tbPSM, the persistent stored module provided by Tibero® (hereafter Tibero).
Database
RDBMS
SQL
tbPSM
The guide contains 8 chapters.
This chapter describes functions related to row sets provided by tbJDBC.
A row set is an object that includes a set of row data. It can be accessed using the javax.sql.RowSet object.
There are three kinds of row sets.
Cached Row Set (currently the only row set that tbJDBC provides)
JDBC Row Set
Web Row Set
Multiple listeners can be registered to a row set. To register a listener, the addRowSetListener() method is used, and to delete a listener, the removeRowSetListener() method is used. The listener must be implemented using the javax.sql.RowSetListener interface.
The events of the RowSetListener interface are used as follows:
Create a listener class.
Register the listener to a list of RowSet objects.
A Cached Row Set is a row set that stores all rows in a cache, and is not connected to the database. In tbJDBC, a Cached Row Set is provided as the TbCachedRowSet class.
A RowSet object is initialized using the execute() method, and can then be used in the same way as a java.sql.ResultSet object.
This section describes two ways to create a RowSet object.
The following example illustrates how to create a RowSet object using a query.
Create a TbCachedRowSet.
Specify the URL, user name, password, and query, and then create a RowSet object by calling the execute() method.
The following example shows how to create a RowSet object using an existing result set.
Create a TbCachedRowSet object.
Create a RowSet object by calling the populate() method with a result set object (rset).
It is possible to search rows by moving forward or backward with a RowSet object.
The following is an example.
It is also possible to insert, delete, or modify rows. The acceptChanges() method must be called to commit any changes.
The following is an example.
The restrictions on row sets are the same as those for updatable result sets.
This chapter explains how to install and uninstall the Tibero client in manual mode.
Prepare the installation file. Create a folder in which to install the Tibero client and unzip the installation file there; the necessary subfolders are created.
Configure the environment variables.
(1) In Windows, select [Edit system environment variables] -> [Advanced] -> [Environment variables] and configure [System variables].
[Figure 2] Configuring system variables
(2) In the [Environment variable], select [System variables] -> [New] and configure TB_HOME and TB_SID.
[Figure 3] TB_HOME and TB_SID configuration
(3) Unzip the compressed binary file (tar.gz) in the directory 'C:\Tibero'. Then, copy the license file (license.xml) into the subdirectory named 'license' under %TB_HOME%.
(4) Execute tbinstall.vbs, and then check the registry registration.
[Figure 4] Installation complete message
In the registry (regedit), check whether TB_HOME and TB_SID are registered in the HKEY_LOCAL_MACHINE\SOFTWARE\TmaxTibero\Tibero\tibero path.
[Figure 5] TB_HOME, TB_SID registration check
(5) Execute gen_tip.bat.
(6) Delete any unnecessary folders except ‘bin’ and ‘client’ folders.
(7) Configure the network file (tbdsn).
(8) Access the Tibero server with the Tibero client.
Execute tbuninstall.vbs and remove it from the registry.
[Figure 6] Uninstallation complete message
Delete physical files. Uninstall the Tibero client by deleting the directory in the path set to %TB_HOME%.
This chapter explains what JDBC is, and introduces Tibero JDBC (hereafter tbJDBC) provided by Tibero.
CREATE INDEX index_name ON [schema_name.]table_name (col_name) RTREE

Hardware
At least 2.5GB hard disk space
More than 1 GB RAM
Compiler
PSM (C99 support needed)
tbESQL/C (C99 support needed)
At least 2.5GB hard disk space
More than 1 GB RAM
Compiler
PSM (C99 support needed)
tbESQL/C (C99 support needed)
Platform
HP-UX 11i v3(11.31)
Solaris (Solaris 11)
AIX (AIX 7.1/AIX 7.2/AIX 7.3)
GNU (X86, 64, IA64)
Red Hat Enterprise Linux 7 kernel 3.10.0 or later
Windows(x86) 64bit
Hardware

index_name
Name of a Spatial index.
schema_name
Owner of a table for which an index is created.
table_name
Name of a table for which an index is created.
col_name
Name of a GEOMETRY type column for which an index is created.
RTREE
Indicates an RTREE index.
index_name
Name of an index to drop
cursorMoved
Occurs whenever a row is moved using the next() or previous() method.
rowChanged
Occurs when a new row is added, or an existing row is modified or deleted.
rowSetChanged
Occurs when the entire row set is created or modified.
TBR-2048 : Data source was not found.

tbsql: error while loading shared libraries: libtbcli.so:
cannot open shared object file: No such file or directory

ERROR: environment variable $TB_SID is not set

tbdown: environment variable TB_HOME or TB_SID is not set.
Tip file open failure.: No such file or directory
tbdown failed. proc info file is deleted.

tip file does not exist / reading tip file failed / malformed tip file

extracting information from the existing cluster failed

Remote file transfer failed! / Remote configuration export failed!

SQL> DROP TABLESPACE SYSSUB INCLUDING CONTENTS AND DATAFILES;

SQL> CREATE TABLESPACE SYSSUB DATAFILE '<SYSSUB path>/syssub001.dtf' ...;

$TB_HOME/scripts/tpr.sql
$TB_HOME/scripts/pkg/_pkg_dbms_tpr.tbw

Listener port = 8629
bind() failed: Address already in use.
Error: Timedout while trying to open port 8629
Check if there are any Tibero instances running.
Tibero instance startup failed!

SQL>CREATE TABLE GIS (ID INTEGER PRIMARY KEY, GEOM GEOMETRY);
Table 'GIS' created.
SQL>CREATE INDEX RT_IDX_GIS ON GIS(GEOM) RTREE;
Index 'RT_IDX_GIS' created.
INSERT INTO GIS VALUES (101, ST_GEOMFROMTEXT('POINT(1 1)'));
INSERT INTO GIS VALUES (102, ST_GEOMFROMTEXT('MULTIPOINT(1 1, 2 2)'));
INSERT INTO GIS VALUES (103, ST_GEOMFROMTEXT('LINESTRING(1 1, 2 2)'));
INSERT INTO GIS VALUES (104, ST_GEOMFROMTEXT('MULTILINESTRING((1 1, 2 2),
(3 3, 4 5))'));
INSERT INTO GIS VALUES (105, ST_GEOMFROMTEXT('POLYGON((1 1, 2 1, 2 2, 1 2,
1 1))'));
INSERT INTO GIS VALUES (106, ST_GEOMFROMTEXT('POLYGON((0 0, 0 12, 12 12, 12 0,0 0),
(6 10, 6 11, 9 11, 9 10,6 10), (6 3, 6 6, 9 6, 9 3,6 3))'));
INSERT INTO GIS VALUES (107, ST_GEOMFROMTEXT('MULTIPOLYGON(((1 1, 2 1, 2 2, 1 2, 1 1)),
((3 3, 3 5, 5 5, 5 3, 3 3)))'));
INSERT INTO GIS VALUES (108, ST_GEOMFROMTEXT('GEOMETRYCOLLECTION(POINT(1 1),
LINESTRING(2 2, 3 3))'));
INSERT INTO GIS VALUES (109, ST_BOUNDARY(ST_GEOMFROMTEXT('POINT(10 10)')));
INSERT INTO GIS VALUES (110, ST_GEOMFROMTEXT('GEOMETRYCOLLECTION(POINT(1 1),
LINESTRING(2 2, 3 3))'));
COMMIT;

ALTER TABLE table_name ADD CONSTRAINTS
constraint_name CHECK(ST_SRID(col_name)=srid);

SQL>DROP INDEX RT_IDX_GIS;
Index 'RT_IDX_GIS' dropped.
SQL>UPDATE GIS SET GEOM=ST_SETSRID(GEOM,4326);
10 rows updated.
SQL>ALTER TABLE GIS ADD CONSTRAINTS SRID4326 CHECK(ST_SRID(GEOM)=4326);
Table 'GIS' altered.
SQL>CREATE INDEX RT_IDX_4326 ON GIS(GEOM) RTREE;
Index 'RT_IDX_4326' created.
COMMIT;

ALTER TABLE table_name DROP CONSTRAINTS constraint_name;

SQL>DROP INDEX RT_IDX_4326;
Index 'RT_IDX_4326' dropped.
SQL>ALTER TABLE GIS DROP CONSTRAINTS SRID4326;
Table 'GIS' altered.
SQL>UPDATE GIS SET GEOM=ST_SETSRID(GEOM,0);
10 rows updated.
SQL>ALTER TABLE GIS ADD CONSTRAINTS SRID4326 CHECK(ST_SRID(GEOM)=0);
Table 'GIS' altered.
SQL>CREATE INDEX RT_IDX_GIS ON GIS(GEOM) RTREE;
Index 'RT_IDX_GIS' created.
COMMIT;

DROP INDEX index_name

public class MyListener implements RowSetListener
{
public void cursorMoved(RowSetEvent event) {
// do work
}
public void rowChanged(RowSetEvent event) {
// do work
}
public void rowSetChanged(RowSetEvent event) {
// do work
}
}

MyListener mListener = new MyListener();
rowset.addRowSetListener(mListener);

TbCachedRowSet rowset = new TbCachedRowSet();

rowset.setUrl("jdbc:tibero:thin:@localhost:8629:dbsvr");
rowset.setUsername("tibero");
rowset.setPassword("tmax");
rowset.setCommand("SELECT * FROM emp");
rowset.execute();

TbCachedRowSet rowset = new TbCachedRowSet();

ResultSet rset = pstmt.executeQuery();
rowset.populate(rset);

rowset.beforeFirst();
while (rowset.next()) {
System.out.println(rowset.getString(1));
}
rowset.afterLast();
while (rowset.previous()) {
System.out.println(rowset.getString(1));
}

rowset.absolute(5);
rowset.moveToInsertRow();
rowset.updateString(1, "tibero");
rowset.insertRow();
rowset.acceptChanges();

The minimum value that can be specified for INMEMORY_SIZE is 100 MB, and the value cannot be changed dynamically while the database is running. In a TAC environment, the INMEMORY_SIZE parameter must be set to the same value on all TAC instances.
V$IM_COLUMN_LEVEL presents the selective column compression levels and indicates whether a column is set as No Inmemory. There is no GV$ view for V$IM_COLUMN_LEVEL in TAC, since all TAC nodes have the same information.
Option
Description
OWNER
Name of the table owner
OBJ_NUM
Table object ID
TABLE_NAME
Table name
SEGMENT_COLUMN_ID
Segment Column Number
COLUMN_NAME
Column name
INMEMORY_COMPRESSION
In-Memory compression option specified for a column, including No Inmemory setting
V$IM_SEGMENTS displays information about all segments existing in the In-Memory Area. You can find information about populated segments through this view.
Option
Description
OWNER
Name of Segment Owner
SEGMENT_NAME
Name of Segment
PARTITION_NAME
Object partition name. Set to NULL for non-partitioned objects.
SEGMENT_TYPE
Type of Segment
Table
Table Partition
Table Subpartition
TABLESPACE_NAME
Name of the tablespace containing the segment
INMEMORY_SIZE
Size of the in-memory version of the segment, in bytes
You don't need to create a vendor-specific program to connect to a specific DBMS.
A Java-written program provides universal access to platforms.
Applications written in Java operate in all environments.
Java functionality is scalable.
With JDBC you can use information obtained from a remote database to create an applet, or connect to one or more internal databases. JDBC also allows you to access information stored in a different storage, and reduces the amount of time to develop a new application.
Tibero complies with the JDBC standard, which is described in the following chapter, and provides an additional API referred to as tbJDBC (Java Database Connectivity for Tibero).
The main characteristics of tbJDBC:
Consists of classes and interface methods.
Supports the SQL standard, SQL-99.
Allows programs to be developed independently of the DBMS type.
Extends the standard Java packages java.sql.* and javax.sql.*.
The version of JDBC depends on the Java standard specification.
- JDBC 3.0 (J2SE 1.4)
- JDBC 4.0 (Java SE 6)
Using JDBC standards, application developers can create and distribute JDBC drivers.
JDBC provides the following 4 types of drivers.
JDBC-ODBC Bridge Driver
Uses ODBC (Open Database Connectivity) without directly connecting to the database through JDBC.
Native-API Driver
Converts a JDBC command to a compatible command for a DBMS's own client program.
Net-Protocol Driver
Converts JDBC to a platform-independent protocol (for WAS), and then connects to WAS.
Native-Protocol Driver
Converts a JDBC statement to the DBMS's own protocol. Provided by the DBMS vendor.
tbJDBC provides the JDBC driver to develop Java applications without installing a database server. The driver is the Native-Protocol Driver (or Thin Driver) type that is introduced in the previous table.
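For reference, the thin-driver connection string has the form jdbc:tibero:thin:@&lt;host&gt;:&lt;port&gt;:&lt;db_name&gt;, as in the TbCachedRowSet example earlier. The following sketch simply assembles such a URL; the helper class and method names are illustrative, not part of tbJDBC.

```java
// Illustrative helper: builds a Tibero thin-driver JDBC URL of the form
// jdbc:tibero:thin:@<host>:<port>:<db_name>. Not a tbJDBC API.
public class TbThinUrl {
    public static String of(String host, int port, String dbName) {
        return "jdbc:tibero:thin:@" + host + ":" + port + ":" + dbName;
    }

    public static void main(String[] args) {
        // Matches the URL used in the TbCachedRowSet example above.
        System.out.println(of("localhost", 8629, "dbsvr"));
    }
}
```

The resulting string is what you would pass to setUrl() or DriverManager in a real application.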
tbJDBC operates as follows:
[Figure 1] tbJDBC architecture
After installing Tibero on the server, tbJDBC is created in the $TB_HOME/client/lib/jar directory. Its file name varies depending on the JDK version.
JDK 1.4 or higher
tibero7-jdbc-14.jar: tbJDBC file
tibero7-jdbc-14-dbg.jar: tbJDBC debugging file
JDK 6 or higher
tibero7-jdbc.jar: tbJDBC file
tibero7-jdbc-dbg.jar: tbJDBC debugging file
tbJDBC has the following restrictions:
When installing a database server, use the automatically created tbJDBC.
No backward compatibility is provided.
JDK 1.4 or a higher version must be installed.
If the system does not use Oracle's JDK, install another JDK that is compatible with the system. For example, for HP-UX, download JDK from HP, and for AIX, download JDK from IBM.
This chapter describes how to configure HugePage for each operating system.
This section describes how to configure HugePage in Linux. Root permission is required.
Warning
Linux supports Transparent Huge Pages (THP), a feature that automatically backs memory with huge pages according to the memory usage patterns of the OS.
THP allows not only shared memory but also process memory to be backed by HugePages, which may affect system performance. Therefore, it is recommended to disable THP by setting the relevant kernel parameter to 'never'.
The following describes how to enable HugePage.
Check the size of HugePage supported in the current OS.
Check the user group ID that runs Tibero.
Set the group allowed to allocate HugePages and the number of HugePages in "/etc/sysctl.conf".
The following is an example of applying the kernel parameter when TOTAL_SHM_SIZE is set to 1024 MB and the size of HugePage supported by the current OS is 2 MB (2048 KB).
Apply the maximum locked memory value in "/etc/security/limits.conf".
The following is an example of configuring the memlock value.
Restart the operating system.
Check that the modified HugePage value has been applied.
Configure the Tibero initialization parameters in the configuration file (.tip).
The following is an example of configuring the initialization parameters.
Restart the Tibero server.
To disable HugePage, restore the values that were modified to enable HugePage to their original values. The modified values can be restored by the same process and order used to enable HugePage.
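The parameter values in the Linux example are related by simple arithmetic: vm.nr_hugepages is TOTAL_SHM_SIZE divided by the HugePage size, and memlock (in KB) is the HugePage size in KB times the number of HugePages. A sketch under the example's assumptions (TOTAL_SHM_SIZE = 1024 MB, HugePage size = 2 MB / 2048 KB); class and method names are illustrative.

```java
// Illustrative arithmetic for the HugePage kernel parameters described above.
public class HugePageCalc {
    // vm.nr_hugepages = TOTAL_SHM_SIZE / HugePage size (both in MB).
    public static long nrHugePages(long totalShmSizeMb, long hugePageSizeMb) {
        return totalShmSizeMb / hugePageSizeMb;
    }

    // memlock in KB = HugePage size (KB) * number of HugePages.
    public static long memlockKb(long hugePageSizeKb, long nrHugePages) {
        return hugePageSizeKb * nrHugePages;
    }

    public static void main(String[] args) {
        long pages = nrHugePages(1024, 2);      // 512, as in the sysctl example
        long memlock = memlockKb(2048, pages);  // 1048576, as in limits.conf
        System.out.println(pages + " " + memlock);
    }
}
```

The computed values match the vm.nr_hugepages=512 and memlock 1048576 settings shown in the examples.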
AIX uses Large Page instead of HugePage. The benefits of using Large Page are similar to those of HugePage.
The following describes how to enable HugePage.
1. Change the Large Page configuration value of the OS.
AIX internally maintains physical memory pool sizes of 4 KB and 16 MB. The size of this pool can be changed to 16 MB by using the vmo command. The remaining space is automatically allocated to the 4 KB pool. From AIX 5.3 on, Large Page pools are dynamically maintained, so the system does not need to be restarted after changing the size.
First, v_pinshm must be separately configured so that the space in which the shared memory is allocated is not swapped to disk. percent_of_real_memory is the amount of memory that TSM possesses as a percentage of the total memory.
Configure the Large Page pool size. num_of_large_pages is an integer equal to TSM / 16 MB.
2. Configure user permissions.
According to the security policy, all users (except the root user) must have the CAP_BYPASS_RAC_VMM permission to use Large Page. The permission can be configured by using the chuser command.
3. Configure the following Tibero initialization parameters in the environment configuration file (.tip).
The following example configures the initialization parameters.
4. Restart the Tibero server.
To disable Large Page, restore the values that were modified to enable Large Page to their original values.
On Solaris, the functionality of HugePage is obtained through the ISM (Intimate Shared Memory) feature, which uses Large Pages.
The advantages of using ISM are as follows:
ISM shared memory is automatically locked by the OS when it is created. This ensures that the memory segment is not swapped to disk. It also allows the kernel to use a fast locking mechanism for I/O of the shared memory segment.
The memory structure used to convert the kernel's virtual memory address into a physical memory address is shared between multiple processes. This sharing reduces CPU consumption and conserves kernel memory space.
Large Page supported in the system's MMU (Memory Management Unit) is automatically allocated in the ISM memory segment. This allocation improves system performance by conserving memory space for managing pages and by simplifying virtual memory address conversion.
To enable HugePage, configure the server initialization parameter as follows.
The parameter is only valid in Solaris and the default value is Y.
If this function is turned on, shmget is used to create shared memory and the SHM_SHARED_MMU option is added when attaching with shmat. This function is only applied in the server process and the listener process. The client module that attaches the server's TSM does not use this option. However, if the parameter is not set to Y, change the value to Y and restart the Tibero server.
Set the server initialization parameter _USE_ISM to N and then restart the Tibero server.
Tibero does not restrict the size of HugePage, which can be set within the range allowed by different operating systems.
The recommended settings are as follows:
For small memory (less than 64GB): 4MB
For large memory (more than 64GB): 16MB
This chapter describes how to install Tibero in a TAC (Tibero Active Cluster) configuration.
Verify the following before configuring TAC.
System requirements: For more information about the system requirements for TAC, refer to "System Requirements" on the Installation Overview page.
Installation requirements: Verify the basic installation requirements in “” before configuring TAC.
IP address and port information
Socket buffer configuration
Shared disk type
Before starting installation, check the external IP address and listener port number which are needed by an instance of Tibero.
In addition, the interconnect IP address, interconnect port number, and the CM port number are also required. When using VIP or IP filtering, make sure that IPs are assigned to the same subnet.
Set the socket buffer values of the operating system.
TAC requires shared disk space that all instances can access together.
Executing and operating TAC requires at least seven shared files. These files are created under the {SHARED_DISK_HOME} path specified by the user during installation. If necessary, {TAC_INSTANCE_ID} is appended to the file names. For example, if {SHARED_DISK_HOME} is "dev/tac" and {TAC_INSTANCE_ID} is 0, the path of an undo log file is "dev/tac/UNDO0.dtf".
The following is the list of shared files and their paths required when initially installing nodes.
After the initial installation, installing additional nodes requires the following four shared files per node. For information on each file's path, see the previous table.
3 redo log files
1 undo log file
You can verify whether the installation is successful by running the tbcm command.
The command shows the CM configuration information.
There are two methods for uninstalling a node in the TAC environment: console mode and manual mode.
The following is the process of removing a node in console mode.
Shut down the Tibero instance.
End TBCM.
Delete the Tibero installation directory as well as all the subdirectories by using the OS's command.
The following is the process of removing a node manually.
Shut down the Tibero instance.
End TBCM.
Delete the Tibero installation directory as well as all the subdirectories.
This chapter describes how to enable or disable IMCS and how to set individual objects for population into IMCS.
IMCS is enabled and disabled by specifying the value of the INMEMORY_SIZE parameter.
The default value of the INMEMORY_SIZE parameter is 0, which means IMCS is disabled. To enable IMCS, you need to set the INMEMORY_SIZE parameter to at least 100 MB before starting the database. If the INMEMORY_SIZE parameter is set to a value that is smaller than 100 MB, it defaults to 0.
The INMEMORY_SIZE parameter cannot be dynamically modified. To modify the parameter, you need to restart the database.
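The fallback rule above (values below the 100 MB minimum are treated as 0, leaving IMCS disabled) can be sketched as follows; the class and method are illustrative, not Tibero code.

```java
// Illustrative sketch of the documented INMEMORY_SIZE rule: configured values
// below the 100 MB minimum fall back to 0, which means IMCS stays disabled.
public class InMemorySizeRule {
    static final long MIN_MB = 100;

    // Returns the effective INMEMORY_SIZE (in MB) for a configured value in MB.
    public static long effectiveSizeMb(long configuredMb) {
        return configuredMb >= MIN_MB ? configuredMb : 0;
    }

    public static void main(String[] args) {
        System.out.println(effectiveSizeMb(500)); // 500: IMCS enabled
        System.out.println(effectiveSizeMb(64));  // 0: below minimum, disabled
    }
}
```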
Modify the INMEMORY_SIZE parameter value. Change the value of INMEMORY_SIZE in the configuration file (tip file).
Shut down the database.
Restart the database.
To enable IMCS, you need to restart the database.
Set the INMEMORY_SIZE parameter. Before starting the database, set INMEMORY_SIZE to at least 100M in the configuration file (tip file).
If the database is running, shut it down.
Restart the database.
To disable IMCS, you need to restart the database.
Set the INMEMORY_SIZE parameter. Before starting the database, set INMEMORY_SIZE to 0 in the configuration file (tip file) or delete the INMEMORY_SIZE parameter.
Shut down the database.
Restart the database.
This section describes how to enable or disable individual objects for population into IMCS with priority and compression options.
In-Memory Population (Population) is a separate step that occurs when the database reads existing row format data from disk and converts it into columnar format, and then loads it into IMCS. Only tables, partitions and subpartitions can be populated in IMCS.
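The row-to-column conversion at the heart of population can be pictured as a transpose: each column's values become contiguous in memory. Real population also compresses and encodes each column; the class below is a conceptual sketch, not Tibero code.

```java
// Conceptual sketch of population's row-format to column-format pivot.
public class RowToColumnar {
    // rows[i][j] = value of column j in row i; returns cols[j][i].
    public static String[][] toColumnar(String[][] rows) {
        int nRows = rows.length, nCols = rows[0].length;
        String[][] cols = new String[nCols][nRows];
        for (int i = 0; i < nRows; i++)
            for (int j = 0; j < nCols; j++)
                cols[j][i] = rows[i][j];
        return cols;
    }

    public static void main(String[] args) {
        String[][] rows = { {"1", "SMITH"}, {"2", "ALLEN"} };
        String[][] cols = toColumnar(rows);
        // The second column's values now sit contiguously: SMITH, ALLEN
        System.out.println(cols[1][0] + " " + cols[1][1]);
    }
}
```

Columnar layout is what makes In-Memory scans of a few columns fast: only the needed column arrays are read.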
Based on the priority option, population is enabled when starting up the database or when accessing an In-Memory object.
Priority of population can be controlled by DDL statements that include an INMEMORY PRIORITY clause.
Population priority can be set for tables, partitions and subpartitions. Columns are not eligible for priority setting. Note that setting the In-Memory option for objects is just a pre-population task, and that it does not enable population by itself.
CREATE TABLE statement
ALTER TABLE statement
Using an INMEMORY clause in DDL statements, you can set the INMEMORY option for tablespaces, tables, partitions and subpartitions.
Basically, the INMEMORY clause can be specified only at the segment level. If it is specified at the column level, the range of available options will be limited. The column-level INMEMORY clause will be discussed later.
To set the INMEMORY option, specify an INMEMORY clause in the following statements.
CREATE TABLESPACE or ALTER TABLESPACE: If the INMEMORY option is specified on a tablespace, the option applies to all tables created in that tablespace. If the INMEMORY option is also specified on a table, the table-level setting overrides the tablespace-level setting. The INMEMORY option specified for a tablespace affects only tables created afterwards; even if you set the INMEMORY option through the ALTER TABLESPACE statement, it does not apply to already existing tables. Likewise, if you change the option to NO INMEMORY through ALTER TABLESPACE, existing tables that are already set to INMEMORY do not switch to NO INMEMORY.
CREATE TABLE or ALTER TABLE: If you use the INMEMORY option for a table, non-virtual columns become eligible for population by default. You can selectively make specific columns eligible by using the column-level INMEMORY clause. For partitioned tables, you can specify the INMEMORY option for individual partitions. If the option is not specified for a partition, it is inherited from the table level; if it is specified, it overrides the table-level setting.
CREATE TABLE statement
ALTER TABLE statement
You can make a specific column ineligible for In-Memory population by using the column-level INMEMORY clause.
CREATE TABLE statement
ALTER TABLE statement
The INMEMORY clause at the tablespace level must be preceded by the DEFAULT clause.
CREATE TABLESPACE statement
ALTER TABLESPACE statement
This chapter describes the basic concepts and functions of Tibero Active Storage.
Tibero Active Storage (TAS) manages disk devices without an external solution. TAS acts as a file system and a logical volume manager that stores the data files and log files required for running Tibero. TAS provides a clustering function that can use the Tibero Active Cluster (TAC) function when using a shared disk.
This chapter describes the C external procedure utility used to create the functions of a user shared library.
The external C procedure provides several APIs which help a user to implement a user shared library. These APIs are referred to as C External Procedure Utility.
The functions included in the external C procedure utility are:
SQLExtProcAllocMemory
SQLExtProcRaiseError
$ echo %TB_HOME% %TB_SID%
$ cd %TB_HOME%\bin
$ tbinstall.vbs %TB_HOME% %TB_SID%

$ echo %TB_HOME% %TB_SID%
$ cd %TB_HOME%\config
$ gen_tip.bat

tibero=(
    (INSTANCE=(HOST=192.168.xxx.xxx)
        (PORT=8629)
        (DB_NAME=tibero)
    )
)

C:\Users\tmax\Desktop\tibero7\config>tbsql tibero/tmax

SQL> SELECT IPADDR, USERNAME FROM V$SESSION;

IPADDR
-----------------------------------------------
USERNAME
---------------------------------------------------------------------------------
192.168.xxx.xxx
TIBERO

1 row selected.

$ echo %TB_HOME% %TB_SID%
$ cd %TB_HOME%\bin
$ tbuninstall.vbs

CON_ID: The ID of the container to which the data pertains
BYTES: Number of bytes in the segment
BYTES_NOT_POPULATED: Size, in bytes, of the portion of the segment that is not populated in the In-Memory Area
POPULATE_STATUS: Status of segment population (STARTED, COMPLETED, or FAILED)
INMEMORY_PRIORITY: Priority for segment population (LOW, MEDIUM, HIGH, CRITICAL, or NONE)
INMEMORY_DISTRIBUTE: In-Memory DISTRIBUTE option specified for the segment (AUTO, BY ROWID RANGE, BY PARTITION, or BY SUBPARTITION)
INMEMORY_DUPLICATE: In-Memory DUPLICATE option specified for the segment (NO DUPLICATE, DUPLICATE, or DUPLICATE ALL)
INMEMORY_COMPRESSION: In-Memory compression option specified for the segment (NO MEMCOMPRESS, FOR QUERY LOW, or FOR QUERY HIGH)
a, did, in, only, then, where, all, do, into, onto, there, whether, almost, does, is, or, therefore, which, also, either, it, our, these, while, although, for, its, ours, they, who, an, from, just, s, this, whose, and, had, ll, shall, those, why, any, has, me, she, though, will, are, have, might, should, through, with, as, having, Mr, since, thus, would, at, he, Mrs, so, to, yet, be, her, Ms, some, too, you, because, here, my, still, until, your, been, hers, no, such, ve, yours, both, him, non, t, very, but, his, nor, than, was, by, how, not, that, we, can, however, of, the, were, could, i, on, their, what, d, if, one, them, when

vm.nr_hugepages: Number of HugePages; set to [TOTAL_SHM_SIZE / HugePage size].
vm.hugetlb_shm_group: Group ID allowed to allocate HugePages; set to the user group ID that runs Tibero.
memlock: HugePage size * number of HugePages
TOTAL_SHM_SIZE: HugePage size * HugePage count
USE_HUGE_PAGE: Y
TOTAL_SHM_SIZE: 1 GB (default value)
USE_HUGE_PAGE: Y
External IP address: IP address for external communication.
Interconnect IP address: Internal IP address for communication among nodes.
Listener port number: Port number assigned when each node's Tibero instance starts.
CM port number: Port number assigned to CM for communication among nodes.
Interconnect port number: Port number for communication among each node's Tibero instances.
SB_MAX: 4194304
TCP_RECVSPACE: 4194304
TCP_SENDSPACE: 1048576
net.core.rmem_max: The maximum value among 67108864, the current OS setting, TCP_RCVBUF_SIZE, and _INC_TCP_RCVBUF_SIZE
net.core.wmem_max: The maximum value among 67108864, the current OS setting, TCP_RCVBUF_SIZE, and _INC_TCP_RCVBUF_SIZE
max_buf: 4194304
recv_buf: 4194304
send_buf: 1048576
Control file: {SHARED_DISK_HOME}/c1.ctl (changeable)
Cluster file for CM: {SHARED_DISK_HOME}/cfile/cls1_cfile (changeable)
Redo log files:
{SHARED_DISK_HOME}/log{TAC_INSTANCE_ID}1.log
{SHARED_DISK_HOME}/log{TAC_INSTANCE_ID}2.log
{SHARED_DISK_HOME}/log{TAC_INSTANCE_ID}3.log
Undo log file: {SHARED_DISK_HOME}/UNDO{TAC_INSTANCE_ID}.dtf
User tablespace file: {SHARED_DISK_HOME}/usr001.dtf
System tablespace file: {SHARED_DISK_HOME}/system001.dtf
Temporary tablespace file: {SHARED_DISK_HOME}/temp001.dtf
On-demand population: The default value of the INMEMORY PRIORITY option is NONE. In this case, population occurs only when the object is accessed through an In-Memory scan. Population does not occur if the object is not accessed, or is accessed through an index scan or full table scan.
Priority-based population: If the INMEMORY PRIORITY option is set to a value other than NONE, population occurs without access to the object. Population starts with the highest priority level; within the same priority level, the order is not assured. When there is no more space in IMCS, population stops.

PRIORITY NONE: Population occurs only when objects are accessed.
PRIORITY LOW: Population occurs whether or not objects are accessed. Objects at this level are populated after those at MEDIUM, HIGH, and CRITICAL are completed.
PRIORITY MEDIUM: Population occurs whether or not objects are accessed. Objects at this level are populated after those at HIGH and CRITICAL are completed.
PRIORITY HIGH: Population occurs whether or not objects are accessed. Objects at this level are populated after those at CRITICAL are completed.
PRIORITY CRITICAL: Population occurs whether or not objects are accessed. Objects at this level are populated first among all.
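The ordering rules above can be sketched as follows: at startup, objects are populated CRITICAL first, then HIGH, MEDIUM, LOW, while NONE objects are left for on-demand population. The enum and method are illustrative, not a Tibero API.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of priority-based population ordering.
public class PopulationOrder {
    enum Priority { CRITICAL, HIGH, MEDIUM, LOW, NONE }

    // Objects with PRIORITY NONE are populated only on access, so they are
    // excluded from startup population; the rest are ordered CRITICAL first.
    public static List<Priority> startupOrder(List<Priority> configured) {
        return configured.stream()
                .filter(p -> p != Priority.NONE)
                .sorted()                         // enum order: CRITICAL..LOW
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(startupOrder(Arrays.asList(
                Priority.LOW, Priority.NONE, Priority.CRITICAL, Priority.HIGH)));
        // [CRITICAL, HIGH, LOW]
    }
}
```

Note that within one priority level the real population order is not assured, as stated above; the sketch only models the between-level ordering.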
Universal:
UTF8: 8-bit International Standard Multilingual (default value)
UTF16: 16-bit International Standard Multilingual
Korean:
EUCKR: EUC 16-bit Korean
MSWIN949: MS Windows Code Page 949 Korean
English:
ASCII: ASCII 7-bit English
Japanese:
SJIS: SHIFT-JIS 16-bit Japanese
SJISTILDE: SHIFT-JIS 16-bit Japanese including fullwidth tilde
JA16SJIS: MS Windows Code Page 932 Japanese
JA16SJISTILDE: MS Windows Code Page 932 Japanese including fullwidth tilde
JA16EUC: EUC 24-bit Japanese
JA16EUCTILDE: EUC 24-bit Japanese
Chinese:
GBK: MS Windows Code Page 936 Chinese
GB18030: Chinese National Standard GB18030-2000
Chinese (Hong Kong, Taiwanese):
ZHT16HKSCS: HKSCS2001 Hong Kong
ZHT16BIG5: BIG5 Code Page Chinese
ZHT16MSWIN950: MS Windows Code Page 950 Chinese
EUCTW: EUC 32-bit Traditional Chinese
Vietnamese:
VN8VN3: VN3 8-bit Vietnamese
Thai:
TH8TISASCII: Thai Industrial Standard 620-2533 - ASCII 8-bit
Eastern European:
EE8ISO8859P2: ISO8859-2 Eastern European
Western European:
WE8MSWIN1252: MS Windows Code Page 1252 Western European
WE8ISO8859P1: ISO8859-1 Western European
WE8ISO8859P9: ISO8859-9 Western European (Turkish)
WE8ISO8859P15: ISO8859-15 Western European
Russian, Bulgarian:
CL8MSWIN1251: MS Windows Code Page 1251 Cyrillic Script
CL8KOI8R: KOI8-R Cyrillic Script
CL8ISO8859P5: ISO8859-5 Cyrillic Script
RU8PC866: IBM-PC Code Page 866 8-bit Cyrillic Script
Greek:
EL8ISO8859P7: ISO8859-7 Greek
EL8MSWIN1253: MS Windows Code Page 1253 8-bit Greek
Arabic:
AR8ISO8859P6: ISO8859-6 Arabic
AR8MSWIN1256: MS Windows Code Page 1256 8-bit Arabic
Hebrew:
IW8ISO8859P8: ISO8859-8 Hebrew

The following example checks the available character sets for installing the Tibero binary.
$ grep Hugepagesize /proc/meminfo
Hugepagesize: 2048 KB

$ id -a
uid=579(tibero) gid=579(tibero) groups=579(tibero)

$ cat /etc/sysctl.conf
......Omitted......
vm.nr_hugepages=512
vm.hugetlb_shm_group=579

$ cat /etc/security/limits.conf
......Omitted......
tibero soft memlock 1048576
tibero hard memlock 1048576

$ egrep -e HugePages_Total /proc/meminfo
HugePages_Total: 512

$ cat tibero.tip
......Omitted......
TOTAL_SHM_SIZE=1G
USE_HUGE_PAGE=Y

$ tbdown
Tibero instance terminated (NORMAL mode).
$ tbboot
Listener port = 8629
Change core dump dir to /home/tibero/tibero7/instance/tibero.
Tibero 7
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).

$ vmo -r -o v_pinshm=1
$ vmo -r -o maxpin%=percent_of_real_memory

$ vmo -p -o lgpg_regions=num_of_large_pages -o lgpg_size=16M

$ chuser capabilities=CAP_NUMA_ATTACH,CAP_BYPASS_RAC_VMM,CAP_PROPAGATE <user id>

$ cat tibero.tip
......Omitted......
TOTAL_SHM_SIZE=1G
USE_HUGE_PAGE=Y

$ tbdown
Tibero instance terminated (NORMAL mode).
$ tbboot
Listener port = 8629
Change core dump dir to /home/tibero/tibero7/instance/tibero.
Tibero 7
TmaxData Corporation Copyright (c) 2008-. All rights reserved.
Tibero instance started up (NORMAL mode).

_USE_ISM=Y

_USE_ISM=N

$ tbcm -s

CM information
===============================================================
CM NAME : cm0
CM UI PORT : 8635
RESOURCE FILE PATH : /home/tibero7/cm0_res.crf
CM MODE : GUARD ON, FENCE OFF
LOG LEVEL : 2
===============================================================
tbdown
tbcm -d
rm -rf $TB_HOME

tbdown
tbcm -d
rm -rf $TB_HOME

INMEMORY_SIZE = 500M

INMEMORY_SIZE = 100M

INMEMORY_SIZE = 0
#INMEMORY_SIZE = 500M

CREATE TABLE inmemory_test_priority
( id NUMBER(5) PRIMARY KEY,
  test_column VARCHAR2(15))
INMEMORY PRIORITY HIGH;

ALTER TABLE inmemory_test_priority INMEMORY PRIORITY HIGH;

CREATE TABLE inmemory_test (
  id NUMBER(5) PRIMARY KEY,
  test_column VARCHAR2(15))
INMEMORY;

ALTER TABLE inmemory_test INMEMORY;

CREATE TABLE inmemory_test (
  id NUMBER(5) PRIMARY KEY,
  test_column VARCHAR2(15),
  no_inmemory_column VARCHAR2(20))
INMEMORY
NO INMEMORY (no_inmemory_column);

ALTER TABLE inmemory_test NO INMEMORY (no_inmemory_column);

CREATE TABLESPACE inmemory_ts
DATAFILE 'imts01.dbf' SIZE 40M
ONLINE
DEFAULT INMEMORY;

ALTER TABLESPACE inmemory_ts DEFAULT INMEMORY;

$ tbboot -C
Available character set list
ASCII
CL8ISO8859P5
CL8KOI8R
CL8MSWIN1251
EE8ISO8859P2
EUCKR
......Omitted......
Available nls_date_lang set list
AMERICAN
BRAZILIAN PORTUGUESE
JAPANESE
KOREAN
......Omitted......

Note
For more information about TAC, refer to "Tibero Administrator's Guide".
TAS manages multiple disks in a disk space. A disk space is similar to a file system on top of a logical volume, and Tibero stores files in such a disk space. Disks can be added or removed when using a disk space to run Tibero.
When adding or removing a disk, TAS automatically redistributes the stored data so that all disks in a disk space can be utilized evenly and no data is lost.
TAS provides a mirroring function for data in Tibero. TAS supports two mirroring methods: the two-way method (NORMAL option) and the three-way method (HIGH option). If a separate mirroring solution is used, the TAS mirroring function can be disabled (EXTERNAL option).
This section describes the core concepts required for understanding TAS.
TAS Instance
TAS Disk Space
Mirroring and Failure Groups
TAS Disk
TAS File
TAS is based on Tibero. The running processes of a TAS instance are almost identical to those of Tibero's. However, TAS runs fewer tasks than a database and thus consumes fewer system resources.
TAS stores and manages metadata, which records disk space and file information, in the first disk space. TAS metadata stores the following information.
Disks in a disk space.
Space available in a disk space.
File names in a disk space.
File extent information.
A log for TAS metadata.
The following diagram shows the relationship of a TAS instance with Tibero and a disk.
Tibero does not read and write a file by using TAS. Instead, it directly reads and writes to the disk by using the file's metadata, which is retrieved from TAS.
TAS provides a clustering function for using a shared disk. Only one Tibero instance can use a disk space. In order for multiple Tibero instances to use a disk space, the instances must be configured in TAC.
The following describes the configuration of Tibero and TAS using the cluster function.
A disk space is composed of multiple disks. Each disk space stores metadata (needed for managing the disk space) and the files used in the database.
TAS provides a mirroring function for data stored in a disk space. The function copies data and stores it in multiple disks and then protects it. To use the mirroring function, the following redundancy levels are available when creating a disk space.
NORMAL: 2-way mirroring
HIGH: 3-way mirroring
EXTERNAL: No mirroring
When TAS allocates an extent for a mirrored file, multiple extents (2 or 3) are allocated according to the file's redundancy level. The disk of each allocated extent is stored in a separate failure group. As a result, if an error occurs in a disk included in a failure group or in an entire failure group, no data is lost and continuous service for the disk space can be provided.
A failure group is defined when creating a disk space, and the redundancy level, which is set at creation time, cannot be modified later. If failure groups are not defined, each disk is automatically treated as its own failure group. A disk space with a redundancy level of NORMAL or HIGH must define at least 2 or 3 failure groups, respectively. A disk space with a redundancy level of EXTERNAL does not use failure groups. To use disk capacity efficiently, allocate a similar amount of space to each failure group.
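The extent-copy rule described above can be sketched as a mapping from redundancy level to the number of extent copies, each placed in a distinct failure group. Class and method names are illustrative, not TAS APIs.

```java
// Illustrative mapping from a disk space's redundancy level to the number of
// copies TAS allocates for each extent of a mirrored file.
public class Redundancy {
    public static int extentCopies(String level) {
        switch (level) {
            case "NORMAL":   return 2; // 2-way mirroring
            case "HIGH":     return 3; // 3-way mirroring
            case "EXTERNAL": return 1; // no TAS mirroring (external solution)
            default: throw new IllegalArgumentException("unknown level: " + level);
        }
    }

    public static void main(String[] args) {
        System.out.println(extentCopies("NORMAL")); // 2
        System.out.println(extentCopies("HIGH"));   // 3
    }
}
```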
TAS can use the following disk devices as a TAS disk.
Entire disk
Disk partition
Logical volume
The privilege issue for disks or physical volumes can be resolved by creating a device node with udev.
For disks partitioned into LUNs from scratch in SAN storage, you can configure a TAS cluster without clustering software and GFS2.
When configuring a TAS cluster, a disk set with a different name on each node can be used as a TAS disk. For example, a disk without CLVM can be seen as '/dev/hdisk1' on node 1 and '/dev/hdisk2' on node 2, and a database can be configured in such an environment as long as TAS is configured. Since the disk string is used to search for a TAS disk, the disk/raw device must be set so that it is visible to all the nodes in the cluster.
TAS distributes file information evenly across all disks in a disk space. This allocation method ensures equal usage of disk space and allows all disks in a disk space to have the same I/O load. TAS balances the load among all disks in a disk space. Therefore, different TAS disks must not share the same physical disk device.
When configuring a logical volume with RAID, it is recommended to set the striping size to a multiple of the AU size of TAS. This helps increase performance by aligning the disk's striping size with that of TAS.
Each disk space is divided into allocation units. An allocation unit is the basic unit of allocation in a disk space. A file extent uses one or more allocation units, and a file uses one or more extents. The allocation unit size can be set in the AU_SIZE property when creating a disk space.
The configurable values are 1, 2, 4, 8, 16, 32, and 64MB.
A file stored in a TAS disk space is called a TAS file. Tibero requests the file information from TAS, similar to the way Tibero uses files in a file system.
The name of a TAS file is in the "+{disk space name}/..." format. It is not included in the file information and is managed as an alias.
TAS files are stored in a disk space as a set of extents. Each extent is stored in each disk of a disk space, and each extent uses one or more allocation units. To support large-sized files, TAS uses variable size extents, which reduces the memory required for maintaining file extent information.
The extent size automatically increases when the file size increases. If the file size is small, the disk space's allocation unit (hereafter AU) matches the extent size. When the file size increases, the extent size increases to 4AU or 16AU.
The extent size varies as follows:
Under 20,000: 1 AU
20,000 ~ 40,000: 4 AU
Over 40,000: 16 AU
The following shows the relationship between extents and allocation units.
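The variable extent sizing above can be sketched as a small function. The thresholds are taken from the table, read as the extent's ordinal within the file (an assumption; the source does not state the unit explicitly), and the names are illustrative, not TAS APIs.

```java
// Illustrative sketch of TAS variable extent sizing: extent size grows in
// tiers as the extent count of a file grows.
public class ExtentSize {
    public static int extentSizeInAu(long extentNumber) {
        if (extentNumber < 20000) return 1;   // Under 20,000: 1 AU
        if (extentNumber <= 40000) return 4;  // 20,000 ~ 40,000: 4 AU
        return 16;                            // Over 40,000: 16 AU
    }

    public static void main(String[] args) {
        System.out.println(extentSizeInAu(100));    // 1
        System.out.println(extentSizeInAu(25000));  // 4
        System.out.println(extentSizeInAu(50000));  // 16
    }
}
```

The tiering is what keeps the metadata for very large files small: later extents cover 4 or 16 allocation units each instead of one.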
Using TAS requires running the Active Storage Service Daemon (ASSD) process.
The ASSD process enables communication between TAC and TAS, and it has the following threads.

COMM: Processes messages that the ASSD control thread receives. It sends a request for information to the database, or processes a request and returns the result in TAS.
DMON: Continuously checks whether a disk is accessible. If access fails more than a specific number of times, the disk is treated as FAIL. If access becomes possible again, this thread performs resynchronization. This thread runs on a single node among the TAS nodes, called the TAS master node. If the master node fails, another node becomes the master node.
RBAL: Creates the tasks required for rebalancing and sends them to the RBSL thread.
RBSL: Receives rebalancing tasks from the RBAL thread and processes them.
Using these functions requires an understanding of the ExtProcContext structure provided by Tibero. To create a user shared library and register it as a PSM, particular actions are needed, as explained in this chapter.
To use the external C procedure utility, it must be defined in the same way as in both the tbEPA process and the user shared library.
To pass this information to the tbEPA process, the WITH CONTEXT clause must be specified when the user shared library is registered as a PSM. The first parameter of the user shared library function must always be ExtProcContext *. ExtProcContext is defined in the extproc.h file and is used by the tbEPA process to handle the functions of the C external procedure utility.
To allow users to create the functions of a user shared library using the external C procedure utility, Tibero provides the extproc.h file, located in the $TB_HOME/client/include directory. The functions use the file to access the external C procedure utility. To write a user shared library function which calls any function from the external C procedure utility, the library must contain the extproc.h file.
The following is an example.
In “Function and PSM Object Mapping”, users can see the [WITH CONTEXT] clause. It informs the DBMS that the client application is currently accessing the C external procedure utility. To execute the PSM, the service routine sends this information to the tbEPA process.
Then, the tbEPA process loads the library and sends the pointer of ExtProcContext as the first parameter to the client application.
The first parameter of the PARAMETERS clause must always be CONTEXT.
When WITH CONTEXT is specified without a PARAMETERS clause, users must be aware that parameter number 1 is CONTEXT, and that parameter 2 and subsequent parameters are the PSM parameters.
This chapter describes the functions provided in the C external procedure utility.
In a user shared library, memory is often allocated dynamically. Generally, the malloc and calloc functions are used to allocate memory at runtime, but if memory allocated by these functions is returned to the caller, users may have difficulty deciding when to free it.
To address this problem, the C external procedure utility provides the SQLExtProcAllocMemory function. Memory allocated with this function does not need to be freed explicitly; it is automatically freed after the function is executed.
The following is the syntax of the SQLExtProcAllocMemory function.
Parameters:
  ExtProcContext *: Pointer to the ExtProcContext structure.
  size: Size of the memory to be allocated.

Return value:
  Success: Returns a pointer to the beginning of the allocated memory.
  Failure: Returns NULL.
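The following sketch shows how a shared library function might use it. A malloc-based stand-in is substituted for the real SQLExtProcAllocMemory (which comes from extproc.h and frees the memory automatically) so that the example is self-contained; the function make_greeting is hypothetical.

```c
#include <stdlib.h>
#include <string.h>

typedef struct ExtProcContext ExtProcContext;  /* stand-in; real definition in extproc.h */

/* malloc-based stand-in so this sketch is self-contained; the real
 * SQLExtProcAllocMemory frees the memory automatically after the shared
 * library function finishes, so the caller never calls free(). */
static void *SQLExtProcAllocMemory(ExtProcContext *ctx, size_t size)
{
    (void)ctx;
    return malloc(size);
}

/* Hypothetical shared library function that returns a freshly built string
 * without the caller having to manage its lifetime. */
char *make_greeting(ExtProcContext *ctx, const char *name)
{
    size_t len = strlen("Hello, ") + strlen(name) + 1;
    char *buf = (char *)SQLExtProcAllocMemory(ctx, len);
    if (buf == NULL)        /* NULL is returned on allocation failure */
        return NULL;
    strcpy(buf, "Hello, ");
    strcat(buf, name);
    return buf;
}
```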
While executing user shared library functions, users may encounter an error or need to raise one themselves. In such cases, they can implement error handling in the PSM using the function's return value. However, this error handling is then needed not only in the user shared library functions but also in every location where the corresponding PSMs are used.
To address this problem, the external C procedure utility provides the SQLExtProcRaiseError function, which allows a shared library function to raise a DBMS error directly. Errors raised this way include DIVIDE BY ZERO, NUMBER EXCEEDS PRECISION, etc. This function is also available in "Callback Service".
The following is the syntax of the SQLExtProcRaiseError function.
Parameters:
  ExtProcContext *: Pointer to the ExtProcContext structure.
  errcode: Error code number. (For more information about the error code, refer to "".)
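As a sketch of how a shared library function might raise a DBMS error instead of crashing, the stand-in below records the error code so the example is self-contained; the real SQLExtProcRaiseError is declared in extproc.h, and both the function safe_div and the error code 1301 are illustrative only, not actual Tibero values.

```c
#include <stddef.h>

typedef struct ExtProcContext ExtProcContext;  /* stand-in; real definition in extproc.h */

static int g_last_errcode = 0;  /* lets this sketch observe the raised code */

/* Stand-in for the real function declared in extproc.h, which makes the
 * DBMS raise the given error in the PSM that called the shared library. */
static void SQLExtProcRaiseError(ExtProcContext *ctx, int errcode)
{
    (void)ctx;
    g_last_errcode = errcode;
}

/* Hypothetical function that raises a DBMS error on a zero divisor
 * instead of triggering a C-level crash. */
int safe_div(ExtProcContext *ctx, int a, int b)
{
    if (b == 0) {
        SQLExtProcRaiseError(ctx, 1301);  /* illustrative error code only */
        return 0;  /* return value is discarded once the error is raised */
    }
    return a / b;
}
```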
Like the SQLExtProcRaiseError function, the SQLExtProcRaiseErrorWithMsg function raises an error. Additionally, it can raise a user-defined error with a user-defined message.
The following is the syntax of the SQLExtProcRaiseErrorWithMsg function.
Parameters:
  ExtProcContext *: Pointer to the ExtProcContext structure.
  errcode: User-defined error code (20000 ~ 20999).
  *errmsg: User-defined error message.
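A sketch of raising a user-defined error with a message follows. The stand-in records the code and message so the example is self-contained; the real SQLExtProcRaiseErrorWithMsg is declared in extproc.h, and the function check_age and the code 20001 are hypothetical (the code simply falls in the documented 20000 ~ 20999 range).

```c
#include <stdio.h>
#include <string.h>

typedef struct ExtProcContext ExtProcContext;  /* stand-in; real definition in extproc.h */

static int  g_errcode = 0;      /* lets this sketch observe the call */
static char g_errmsg[128] = "";

/* Stand-in for the real function declared in extproc.h; errcode must be a
 * user-defined code in the 20000 ~ 20999 range. */
static void SQLExtProcRaiseErrorWithMsg(ExtProcContext *ctx, int errcode,
                                        const char *errmsg)
{
    (void)ctx;
    g_errcode = errcode;
    snprintf(g_errmsg, sizeof g_errmsg, "%s", errmsg);
}

/* Hypothetical validation function that raises a user-defined error. */
int check_age(ExtProcContext *ctx, int age)
{
    if (age < 0) {
        SQLExtProcRaiseErrorWithMsg(ctx, 20001, "age must not be negative");
        return 0;
    }
    return age;
}
```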
This chapter describes how to perform pre-installation tasks such as verifying hard disk space, installing JDK, and setting kernel parameters for each operating system.
Before installing Tibero, perform the following tasks.
Verify available disk space
Install JDK
Install OS-specific packages
Set OS-specific kernel parameters (shell limits parameters)
Set NTP server
Verify host name, port number, system account, and localhost
Although the exact requirement varies slightly by platform, installing Tibero requires at least 2 GB of free hard disk space. This is the minimum space needed to create the database after installation.
To install Tibero, JDK 1.5.17 or a later version must be installed.
If your system does not use Oracle's JDK, find a compatible JDK for your system. (For example, you can download JDK from IBM for AIX.)
You can find instructions to install JDK for different systems at the following link.
🔎
This section describes the packages required for each operating system.
For each OS, packages of the following versions or later must be installed; the package names and versions may differ depending on the OS and its version.
For RHEL 8.1 or higher versions, the following packages must be installed.
The following describes how to set kernel and shell limits parameters for each operating system.
After setting all required parameters, reboot the system.
Kernel and shell limits parameters for Linux are as follows:
Configuration file
Setting values
Configuration file
Setting values
For RHEL 7.2 or higher versions, the following kernel parameter settings are required.
Configuration file
Setting values
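As an illustration of the file format only, a kernel parameter fragment in /etc/sysctl.conf might look like the sketch below. The values shown are placeholders, not Tibero's recommended settings; take the actual values from the table for your OS version.

```
# /etc/sysctl.conf -- placeholder values for illustration only
kernel.shmmax = 4294967295
kernel.sem = 10000 32000 10000 10000
fs.file-max = 6815744

# Apply without rebooting (a reboot is still recommended afterwards):
#   sysctl -p
```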
Kernel and shell limits parameters for Solaris are as follows: