SQL system properties are key/value settings that are visible to every Transaction Engine (TE) or Storage Manager (SM) process and client connection in the database. They are stored persistently in the
SYSTEM.PROPERTIES table (see PROPERTIES System Table Description) and are modifiable by the
SET SYSTEM PROPERTY command (see SET).
Note: Only a user who has been granted the system
ADMINISTRATOR role can modify system property values.
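For example, current values can be inspected by querying the SYSTEM.PROPERTIES table. This is a minimal sketch, assuming the PROPERTY and VALUE columns described in the PROPERTIES System Table Description:

```sql
-- List all current system properties and their values
SELECT * FROM SYSTEM.PROPERTIES;

-- Inspect a single property
SELECT VALUE FROM SYSTEM.PROPERTIES WHERE PROPERTY = 'MIN_QUERY_TIME';
```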
The default is
false. When set to
true, all DDL statements automatically commit the user's transaction before and after the statement is executed.
Note: This behavior is similar to how Oracle handles DDL within transactions.
This is set to
false by default and typically does not need to be modified. If set to
true, it defaults to an older cost model mechanism for the SQL query optimizer.
Previously, the NuoDB cost model synthetically added 1000 rows to all small tables for the purposes of computing access costs. As more optimizations have been added, this synthetic addition has been found to complicate optimization in certain cases (for instance, joins between large and small tables, and subquery optimization). We have therefore changed the cost model such that an additional 10 rows are added to all tables, rather than 1000 rows to only small tables. This changes selectivity estimates slightly, which may mean slightly different plans. If you find a query performs worse because the plan has changed, you can recover the previous behavior by setting this property to true.
DB_TRACE, DB_TRACE_TABLE, DB_TRACE_PATTERN, DB_TRACE_MIN_TIME, DB_TRACE_PROCEDURES
These system properties are used to define SQL tracing at the
GLOBAL level. These system properties cannot be modified by the
SET command. See Using the SQL Trace Facility for more information on setting these system properties.
This is an integer multiplier of the
--mem memory value that is used to set the memory limit (per connection) for all blocking SQL engine operations that accumulate data in main memory: hash-grouping, sorting, sorting via priority queue for limit, distinct (via hash-grouping), union, listagg, table functions, and stored
procedures returning result sets. The default value is
0, which is backwards compatible; that is, no checking is performed as to whether any of the aforementioned operations exceed a memory limit. The maximum value that may be set is either
10 or the value of the hard memory limit (whichever is smaller). When specifying a value for
DEFAULT_CONNECTION_MEMORY_LIMIT, consider also the value set for the
--mem connection property. For example, if
--mem is set to 3GB, and
DEFAULT_CONNECTION_MEMORY_LIMIT's multiplying factor is set to 4, then memory-intensive operations can use up to 12GB memory per connection.
Note: When the memory limit is exceeded, any query currently being executed is aborted and an error is generated.
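The multiplier is set with the SET SYSTEM PROPERTY command; the value below is illustrative:

```sql
-- With --mem set to 3GB, a multiplier of 4 allows memory-intensive
-- operations to use up to 12GB of memory per connection
SET SYSTEM PROPERTY DEFAULT_CONNECTION_MEMORY_LIMIT = '4';
```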
This property is used to define a memory threshold that limits the memory consumption of any SQL operation. If this threshold is crossed, spill to disk functionality is triggered if it is available and supported by the operation. The default value is
64MB. Increasing this value makes spill to disk less likely, but can increase the memory usage of the TE.
Note: The QUERYBUFFERSTATS table lists compatible SQL operations that have consumed the most memory and may be useful for tuning this value for a particular workload. For more information, see QUERYBUFFERSTATS System Table Description.
The maximum time (in seconds) that idle connections are left open. By default,
IDLE_CONNECTION_TIMEOUT is set to
0, meaning that this feature is disabled. To specify a timeout, use any value greater than 0.
This is a global setting for timing out idle connections and may be overridden, on a per connection basis, by using the idle-timeout connection property. For more information on using the
idle-timeout property, see Connection Properties.
Any change made to the
IDLE_CONNECTION_TIMEOUT value affects all active connections that do not override the global setting.
Any time a client connection is terminated for being idle for too long, a message will be logged under the net category.
If IDLE_CONNECTION_TIMEOUT is set to a value greater than
0, ensure that the
testWhileIdle DataSource property is enabled (set to
true). If the
testWhileIdle property is not enabled, and the connection is terminated because of
IDLE_CONNECTION_TIMEOUT, the DataSource does not automatically replace the terminated connection in the connection pool. For information on using DataSource properties, see DataSource versus Driver Connections.
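A sketch of enabling the global timeout; the value shown is illustrative:

```sql
-- Close client connections that have been idle for more than 600 seconds
SET SYSTEM PROPERTY IDLE_CONNECTION_TIMEOUT = '600';
```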
For NuoDB internal testing only.
The maximum number of open result sets per connection. The default is
The maximum number of open SQL statements per connection. The default is
A new materialization strategy is used to avoid computing the same subquery multiple times. A subquery will materialize its results in an in-memory table to be reused by the outer query. The
MAX_MATERIALIZED_QUERY_SIZE property specifies the maximum number of bytes that a materialized table can use. If the data to be stored is greater than this limit, it will not be cached and the query will be executed every time the query results are needed. The default is
67108864 bytes (64 MB).
MAX_MATERIALIZED_QUERY_SIZE is used as a limit for each query that could be materialized. If your query has two separate
IN SELECT clauses, your total memory consumption can be up to
2*MAX_MATERIALIZED_QUERY_SIZE. If this property is set to
0, no subqueries are materialized.
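For example, the limit can be raised, or materialization disabled entirely; the 128 MB value is illustrative:

```sql
-- Allow each materialized subquery table up to 128 MB (value in bytes)
SET SYSTEM PROPERTY MAX_MATERIALIZED_QUERY_SIZE = '134217728';

-- Disable subquery materialization entirely
SET SYSTEM PROPERTY MAX_MATERIALIZED_QUERY_SIZE = '0';
```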
The maximum number of statements stored in the slow query log. The default is
The maximum number of precompiled statements to be cached in memory. The default is
500. See LOCALSTATEMENTS System Table Description for more information. Setting
this property to 0 empties the current cache and disables statement caching.
The minimum length of time (in seconds) it takes a SQL query to execute (from start to finish) before it is reported in the slow query log (see QUERYSTATS System Table Description). The default is
10. If the sql-statement-metrics logging category is specified, or the debug logging level is set, the following information is also written to the log:
2018-04-20T09:48:12.293+0200  [6,2690]:SQL metrics for statement: <original SQL statement> ==> COMPILETIME: 380, EXECTIME: 601468, RUNTIME: 601848, TIMESTAMP: 1524210491691483, NUMPARAM: 0, PARAMS: , USER: CLOUD, SCHEMA: USER, CONNID: 6, EXECID: 0, TRANSID: 2690, EXECUTIONCOUNT: 3, CLIENTINFO: nuosql, CLIENTHOST: 127.0.0.1, CLIENTPID: 3469, INDEXHITS: 0, INDEXFETCHES: 0, EXHAUSTIVEFETCHES: 0, INDEXBATCHFETCHES: 0, EXHAUSTIVEBATCHFETCHES: 0, RECORDSFETCHED: 0, RECORDSRETURNED: 1, INSERTS: 0, UPDATES: 0, DELETIONS: 0, REPLACES: 0, RECORDSSORTED: 0, UPDATECOUNT: 0, EVICTED: 0, LOCKED: 0, REJECTEDINDEXHITS: 0, USEDMEMORYBYTES: 0
For information on logging levels supported, see Using Log Files.
If the sql-statement-explain-plans logging category is specified, or the debug logging level is set, the following information is also written to the log:
2018-04-20T09:48:12.293+0200  [6,2690]:EXPLAIN ANALYZE: <the EXPLAIN plan of the statement>
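A sketch of lowering the reporting threshold; the value is illustrative:

```sql
-- Report queries taking longer than 5 seconds in the slow query log
SET SYSTEM PROPERTY MIN_QUERY_TIME = '5';
```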
The minimum length of time (in seconds) it takes a SQL query to execute (from start to finish) before the entry in the log controlled by the MIN_QUERY_TIME property is logged with the
warn logging level (rather than the
debug logging level). The default is
For NuoDB internal testing only.
Used to turn on (
true) or off (
false) the use of statistics by the optimizer. The default is
true. The gathering of statistics happens regardless of this property setting. This property just determines whether or not those statistics are used. We recommend not changing the default value.
This property is used to automatically roll back the transaction chosen as the victim when a deadlock takes place. By default, this is set to
false.
When this property is set to
false (the default), a given update operation, such as
SELECT..FOR UPDATE or
DELETE, in a
READ COMMITTED transaction blocks if it encounters an uncommitted insert operation. If this property is set to
true, then the update operation is not blocked.
This property can be used to disable spill to disk on all TEs in the database. To disable spill to disk functionality on all TEs in the database, set this property to
false. By default, this property is set to true.
The percentage of rows in a table that must change before precompiled statements using that table are re-compiled. The default is 0.10 (10%). Precompiled statements that use a table in which more than 10% of the rows have changed have their cached statements invalidated. This forces a re-compilation the next time the statement is used.
The maximum number of deterministic User-Defined Function (UDF) results that will be cached in memory. The cache stores results for any type of UDF, regardless of how it is implemented (Java or SQL). A deterministic UDF is one that always returns the same result when called with the same arguments. When the cache is full, the oldest entry is removed. If that combination of arguments is used again, the UDF code is executed to compute the result. The default is