The following page details some of the new features of Oracle 11g. It is by no means an exhaustive list, but instead lists some of the highlights.
Some of the features detailed below are only available on Oracle Enterprise Edition and furthermore, may require additional and chargeable database options and packs. Make sure licenses are purchased before using them. If there is any doubt, speak to an account manager.
Oracle Validated RPM
The Oracle Validated RPM can be downloaded onto the server and, when run, will install the prerequisite packages and configure kernel parameters and system settings.
This allows for easy preparation of the server in readiness for the Oracle install.
For RedHat it can be downloaded from the following locations depending on versions
The OEL version can be downloaded from Oracle but requires a support ID.
Oracle Local Registry
The OLR is installed when using clusterware. If using clusterware in a single instance install, then the OLR is stored at
This can be manipulated using the following commands
- ocrcheck -local
- ocrdump -local -stdout
- ocrconfig -local -manualbackup
The -local flag is used for managing a single instance ocr.
A location details file for the ocr can be found in /etc/oracle/ocr.loc
The oracleasm sys module can be used to configure ASM. See the examples below.
[root ~]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets (''). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface : oracle
Default group to own the driver interface : dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root~]# oracleasm exit
[root~]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
[root~]# oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
[root~]# lsmod |grep oracleasm
oracleasm 46100 1
[root~]# oracleasm createdisk ASMDISK13 /dev/xvdn
Writing disk header: done
Instantiating disk: done
[root~]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root~]# oracleasm listdisks
ASMDISK01
ASMDISK02
ASMDISK03
ASMDISK04
ASMDISK05
ASMDISK06
ASMDISK07
ASMDISK08
ASMDISK09
ASMDISK10
ASMDISK11
ASMDISK12
ASMDISK13
The SYS modules can be found in /sys/module
Other Oracle ASM SYS modules include
- oracleasm - Manages ASM Disks
- oracleoks - Kernel Services Module
- oracleacfs - ASM Filesystem Module
- oracleadvm - ASM Dynamic Volume Manager (presents a block device to the OS)
The "createdisk" command only needs to be run on one of the nodes of a cluster; whereas the "scandisks" command will allow the other nodes to discover the disks created on that initial node. Thanks to Wikidot user robbinstw for this update
Dynamic Volumes and Asm Clustered Filesystems
It is now possible to create clustered filesystems managed by ASM. (ACFS)
Use asmca to create an ASM CFS disk group and the filesystem.
The dynamic volumes are stored in /dev/asm
All standard Sysadmin commands can be used against ACFS mountpoints
mkfs -t acfs /dev/asm/asmvol-216
fsck -t acfs /dev/asm/asmvol-216
mount -t acfs /dev/asm/asmvol-216 /u01/app/oracle
The following commands are platform independent
- acfsutil info
Displays ACFS filesystem features
- acfsutil snapshot
Create an ACFS snapshot
- acfsutil registry
Register an ACFS filesystem with the ACFS mount registry
- acfsutil rmfs
Remove an ACFS filesystem
- acfsutil size
Resize an ACFS filesystem
- acfsutil tune
View or modify the ACFS tunables.
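As an illustration, the size and registry commands above might be used as follows. This is a sketch only; the volume path, mountpoint and sizes are examples, not taken from this environment.

```shell
# grow a mounted ACFS filesystem by 5GB
/sbin/acfsutil size +5G /u01/app/oracle

# register a filesystem so it is mounted automatically on restart
/sbin/acfsutil registry -a /dev/asm/asmvol-216 /u01/app/oracle
```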
The following script can be used to start the ASM acfs module on boot.
It should be placed in /etc/init.d
#!/bin/sh
# chkconfig: 2345 30 21
# description: Load Oracle ACFS drivers at system boot
/u01/app/oracle/product/11.2.0/grid/bin/acfsload start -s
Taking snapshots of an ACFS filesystem
/sbin/acfsutil snap create "snapshot_20110919_145036" /u01/app/oracle/acfsmounts/acfs_db1
/sbin/acfsutil snap delete "snapshot_1" /u01/app/oracle/acfsmounts/acfs_db1
Oracle 11.2 Installation
When installing Oracle there is an option to install or not install database options. e.g. Partitioning, OLAP, Data Mining etc. Choosing not to install a database option does not affect what binaries are placed down onto the machine. Instead, it instructs the installer what binaries to link during the link phase of installation.
Metalink note 948061.1 explains how to enable or disable database options within Oracle Enterprise Edition at a later date.
It is possible to upgrade directly to 11.2 from
- 9.2.0.8 and above
- 10.1.0.5 and above
- 10.2.0.2 and above
- 11.1.0.6 and above
Grid Infrastructure should be upgraded first.
Online Database Patching
Checking if a patch is an online patch
opatch query -is_online_patch <patch location>
opatch query <patch location> -all
Online patches require no down time but will require one extra page of OS memory per database process during installation.
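For reference, applying an online patch follows the normal opatch flow; a sketch is below. The patch location and connect string are placeholders, not values from this document.

```shell
# query whether the patch supports online application
opatch query -is_online_patch /tmp/patch_12345678

# apply it to a running instance (connect string format: <SID>:<user>:<password>)
opatch apply online -connectString ORCL:sys:password
```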
Useful srvctl commands
List the database HAS config info.
srvctl config database -d orcl
Configure HAS Component
srvctl modify database -d orcl -n orcl -p /tmp/spfile.ora -a DATA,FRA
ASM New Features
- Fast Mirror Resync
Used only if ASM redundancy is used and only if the disk is not damaged as part of the failure. If these conditions are met then only the blocks that have changed since the failure will be resync'ed when the disk is made available again.
- ASM Preferred Mirror Read
Only appropriate in a stretch RAC cluster using ASM redundancy. In a stretch RAC configuration each node can be directed to read the extent local to them. For example, rather than reading the primary extent at the other site, the node will read the mirrored extent that is local to the node.
- Automatic Variable Extent Sizes.
This feature is automatic and allows the size of an ASM file's extents to increase, in multiples of the AU size, as the file size increases.
1 * AU for the first 20,000 extents
4 * AU for extents 20,000 to 39,999
16 * AU for extents 40,000 and above
- Variable AU Sizes
It is possible to change the AU size when creating a disk group. The AU size can be 1MB, 2MB, 4MB, 8MB, 16MB, 32MB or 64MB.
- Check Command
Disk check functionality has been simplified. e.g. alter diskgroup data check; will perform all checks. In addition there is a repair and norepair option.
- Diskgroup Restricted Mount Options
Diskgroups can be mounted in restricted mode to allow rebalance operations to occur without locking overhead.
- Force Mount and Dismount on Diskgroups
- ASMCMD Improvements.
ASMCMD has commands to allow metadata backups of the ASM catalog. e.g. ASMCMD> md_backup -b jfv_backup_file -g data
lsdsk can run in connected and non-connected mode.
- ACL for Diskgroups
It is now possible to set up users, groups and ACLs for diskgroups. This can be used to prevent dbuser1 from accessing dbuser2's ASM files stored within the same ASM instance.
- ASM Intelligent Data Placement
Can be used to direct ASM to place extents on the HOT or COLD sectors of a disk. ASM diskgroups can even be configured to place the extents on the outer (HOT) area of the disk, but the mirrored extents on the inner (COLD) area of the disk.
NOTE - This is only applicable when using attached disks. It is not appropriate for SAN or NAS configurations.
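A few of the features above can be illustrated with the following statements. This is a sketch; the diskgroup and disk names are invented for the example.

```sql
-- create a diskgroup with a non-default 4MB AU size
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK 'ORCL:ASMDISK01'
  ATTRIBUTE 'au_size' = '4M';

-- run all diskgroup checks without attempting repairs
ALTER DISKGROUP data CHECK NOREPAIR;

-- mount in restricted mode for a low-overhead rebalance
ALTER DISKGROUP data MOUNT RESTRICTED;
ALTER DISKGROUP data REBALANCE POWER 8;

-- force dismount and mount
ALTER DISKGROUP data DISMOUNT FORCE;
ALTER DISKGROUP data MOUNT FORCE;
```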
Improved Table Compression
Table compression can in theory reduce space by 50 to 75 percent and improve performance due to reduced block reads. However, the Advanced Compression option, at a cost of around $11,500 per licensable core, is required.
SQL Access Advisor
Now has advice for partitioning in addition to indexes and MVs.
Temporary Tablespace Enhancements
Tempfiles for temporary tablespaces can now be shrunk. A new view, dba_temp_free_space, has been added to support this operation.
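A shrink might look like this; the tablespace name, file path and sizes are illustrative only.

```sql
-- shrink the temporary tablespace, keeping at least 100MB allocated
ALTER TABLESPACE temp SHRINK SPACE KEEP 100M;

-- or shrink a specific tempfile
ALTER TABLESPACE temp SHRINK TEMPFILE '/u01/oradata/orcl/temp01.dbf';

-- check free space afterwards
SELECT tablespace_name, free_space FROM dba_temp_free_space;
```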
Deferred Segment Creation
A new initialization parameter has been created to allow for deferred segment creation; deferred_segment_creation=true
This can be used to delay the segment creation for an object until the first insert. At the point of the first insert, the segment will be created. This is useful for applications where only a subset of objects are being used. In this instance significant space can be saved by not actually creating the segments, while the tables still exist for the application to work or install.
NOTE - There are some restrictions on the types of objects this works with. For example, this feature does not work with partitioned objects.
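A minimal sketch of the behaviour; the table name is invented for the example.

```sql
ALTER SESSION SET deferred_segment_creation = TRUE;

-- no segment is allocated at creation time
CREATE TABLE demo_deferred (id NUMBER, txt VARCHAR2(30))
  SEGMENT CREATION DEFERRED;

-- the segment appears in dba_segments only after the first insert
INSERT INTO demo_deferred VALUES (1, 'first row');
```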
Data Loading via External Tables
There is now a preprocessor command when creating external tables and using oracle_loader. This allows pre-processing on external files before loading into oracle.
CREATE TABLE sales_transactions_ext
( PROD_ID        NUMBER,
  CUST_ID        NUMBER,
  TIME_ID        DATE,
  CHANNEL_ID     NUMBER,
  PROMO_ID       NUMBER,
  QUANTITY_SOLD  NUMBER,
  AMOUNT_SOLD    NUMBER(10,2),
  UNIT_COST      NUMBER(10,2),
  UNIT_PRICE     NUMBER(10,2))
ORGANIZATION external
( TYPE oracle_loader
  DEFAULT DIRECTORY data_file_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE CHARACTERSET US7ASCII
    PREPROCESSOR exec_file_dir:'gunzip'
    BADFILE log_file_dir:'sh_sales.bad_xt'
    LOGFILE log_file_dir:'sh_sales.log_xt'
    FIELDS TERMINATED BY "|" LDRTRIM
    ( PROD_ID, CUST_ID,
      TIME_ID DATE(10) "YYYY-MM-DD",
      CHANNEL_ID, PROMO_ID,
      QUANTITY_SOLD, AMOUNT_SOLD,
      UNIT_COST, UNIT_PRICE))
  LOCATION ('sh_sales.dat.gz'))
REJECT LIMIT UNLIMITED;
Degrees of Parallel
The RDBMS will now automatically determine if degrees of parallelism should be used if the initialization parameter parallel_degree_policy is set to auto or limited. The default is manual. i.e. the user must manually set the DOP when creating the table.
There are a number of other initialization parameters that determine how parallelism works, for example parallel_min_time_threshold, parallel_servers_target and parallel_degree_limit.
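For example, automatic DOP might be enabled like this; the threshold value is illustrative.

```sql
-- let Oracle choose the degree of parallelism automatically
ALTER SYSTEM SET parallel_degree_policy = AUTO;

-- only consider parallelism for statements estimated to take
-- longer than 10 seconds serially
ALTER SYSTEM SET parallel_min_time_threshold = 10;
```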
The following features are now part of Oracle RDBMS partitioning
- Reference Partitioning - Used to partition two tables that are linked by a referential integrity constraint.
- System Partitioning - Allows the application to determine which partition records are placed in.
- Virtual Column Partitioning - Virtual columns, for example, total_pay as (sal * 1.15), can now be used as the partitioning key.
- Interval Partitioning - Allows the partitioned object to have an interval value which determines when Oracle automatically creates the next partition.
- Partition Advisor (Part of OEM).
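As an illustration, interval partitioning combined with a virtual column might be declared as follows; the table and column names are invented for the example.

```sql
CREATE TABLE emp_pay
( empno     NUMBER,
  sal       NUMBER,
  hiredate  DATE,
  -- virtual column derived from sal
  total_pay AS (sal * 1.15)
)
PARTITION BY RANGE (hiredate)
-- Oracle creates new monthly partitions automatically as rows arrive
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
( PARTITION p0 VALUES LESS THAN (DATE '2011-01-01') );
```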
Various security enhancements have been introduced. Below is a list of some of these
- Case sensitive passwords. Effective as soon as a password is changed or a new user is created. Can be turned off with sec_case_sensitive_logon
- Improved hashing algorithm on passwords.
- Built-in password complexity checker which can be implemented with @?/rdbms/admin/utlpwdmg.sql
- Auditing is now on by default and all records are stored in the database audit trail table AUD$
- Case sensitive passwords are also valid for the password file. This can be turned off using the switch ignorecase=y with the orapwd utility.
Transparent Data Encryption
TDE now includes support for
- Logical Standby Database
- Asynchronous Change Data Capture
- Hardware-based Master Key Protection
Using Tablespace Encryption
1. Create an encryption key using the following statement
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "MyEncryptionKey";
2. Create a tablespace with the encryption keywords
CREATE TABLESPACE encrypt_ts DATAFILE '+DATA' SIZE 100M ENCRYPTION USING '3DES168' DEFAULT STORAGE (ENCRYPT);
There may be a requirement to regenerate the key using a database trigger in the event of a database crash - check Metalink Note: 1228046.1
RMAN Security Enhancements
RMAN now has backup shredding. It can be switched on using
CONFIGURE ENCRYPTION FOR DATABASE ON;
DELETE FORCE NOPROMPT BACKUPSET TAG weekly_backup;
Transparent encrypted backups can be turned on with
CONFIGURE ENCRYPTION FOR DATABASE ON;
SET ENCRYPTION ON;
Shredding can be used to destroy the encryption key in an encrypted backup.
Real Applications Testing
Real Application Testing (RAT) consists of
- SQL Performance Analyzer
- Database Replay
SQL Performance Analyzer
SQL Performance Analyzer allows the DBA to
- Capture SQL Workload from Production
- Transport the SQL Workload to a Test System
- Run SQL against the "old/pre-upgraded" test environment
- Compute "before-change" Performance Data
- Run SQL against the "new/upgraded" environment
- Compute "after-change" Performance Data
- Compare "before" and "after" Performance Data
NOTE If using the AWR to capture the SQL workload in production, make sure the parameters for AWR are set to capture enough or all SQL running on the system. By default, AWR only captures the top 100 SQL statements.
The following package is used for SQL Performance Analyzer; DBMS_SQLPA
The following package is used for managing the SQL workloads; dbms_sqltune. It has numerous procedures, but in particular, the following are used for managing SQL workloads
- create_stgtab_sqlset - Creates the staging table used to store the SQL Tuning Sets.
- pack_stgtab_sqlset - Used to package up the SQL Tuning Sets within the SQL Tuning Set staging table on the source system.
- unpack_stgtab_sqlset - Used to unpack the SQL Tuning Sets from the SQL Tuning Set staging table on the destination system.
The tool is integrated with the SQL Tuning Pack, which allows the tuning of the SQL that are shown to be "bad" after comparing the "before" and "after" performance data. The analysis can then be used to seed the SQL Plan Management baselines.
There are a number of new/amended views to support SPA;
- DBA_ADVISOR_TASKS - Amended
- DBA_ADVISOR_FINDINGS - Amended
- DBA_ADVISOR_EXECUTIONS - New
- DBA_ADVISOR_SQLPLANS - New
- DBA_ADVISOR_SQLSTATS - New
SQL Plan Baseline
SQL Plan Baselines is a new feature that automatically monitors the plans of SQL statements and stores them in a repository. The plans stored in the repository are then verified to see which plan is most performant.
The verification of plans is performed by the SQL Tuning Advisor automatically each night by a new maintenance job. The maintenance job takes the worst performing SQL and runs them for a set period of time to see if a better plan is available for the SQL.
If a better plan is discovered, then the SQL Tuning advisor will, if required, automatically promote the best SQL plan stored to the "live" plan for the SQL.
Automatic SQL Tuning
Oracle 11g will now automatically tune SQL and store the improved profiles and plans as baselines. These baselines can then be accepted by the DBA or automatically applied by setting the relevant initialization parameter.
This is achieved using a new maintenance task (AutoTask) which calls the SQL Tuning Advisor, which then spends 1hr running and tuning the worst 150 SQL statements. (The SQL Tuning Advisor in conjunction with the optimizer runs in a "special" tuning mode so as not to put undue load on the system, i.e. it does not fully execute the worst performing SQL statements).
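Automatic acceptance of the resulting SQL profiles is controlled per task. A sketch using dbms_sqltune is below; the task name is the documented default for the automatic job.

```sql
-- allow the automatic SQL tuning task to implement
-- accepted SQL profiles without DBA intervention
BEGIN
  dbms_sqltune.set_tuning_task_parameter(
    task_name => 'SYS_AUTO_SQL_TUNING_TASK',
    parameter => 'ACCEPT_SQL_PROFILES',
    value     => 'TRUE');
END;
/
```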
Baselines can be used to capture key metrics on the system for previously configured windows. This can be a default window (e.g. the default moving window) or a window created by the DBA. The DBA can then take the metrics captured as part of the baseline window and compare them against previous historical baselines. This allows the DBA to trend the system usage, in particular between two similar baseline windows. For example, a DBA can set up a baseline to capture the weekly batch run. Oracle will then track the key metrics each week when the batch runs. After an elapsed period of time the DBA can compare the baselines to see if the batch run times and resources are increasing, decreasing or staying the same.
The following dictionary views are useful for tracking baselines; dba_hist_baseline, dba_hist_baseline_details and dba_hist_baseline_template.
In addition to comparing different baselines, it is possible to set thresholds against the metric in the baselines. This allows alerts to be raised if certain thresholds are breached.
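Creating a static baseline from existing AWR snapshots might look like this; the snapshot IDs and baseline name are illustrative.

```sql
-- capture snapshots 100-105 as a named baseline
BEGIN
  dbms_workload_repository.create_baseline(
    start_snap_id => 100,
    end_snap_id   => 105,
    baseline_name => 'weekly_batch_bl');
END;
/
```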
New Maintenance Windows
In 11g there are a new set of automated maintenance tasks. The tasks are listed below
- Optimizer Statistics Gathering
- Segment Advisor
- Automatic SQL Tuning Advisor
These tasks are scheduled during the daily and weekend maintenance windows using the default maintenance plan. However, they are NOT scheduler jobs.
Instead, the background process MMON wakes up the background ABP process and it is the ABP background process that schedules the maintenance jobs based on their definitions stored in the database.
IO Calibration
11g contains a new feature that allows the IO subsystem to be calibrated. This is different from system statistics and, unlike system statistics, requires a quiet period on the system to run. It should NOT be run during heavy or busy times as the calibration will saturate the disks and cause a performance issue on the system.
It can be used via Enterprise Manager or using the PL/SQL procedure dbms_resource_manager.calibrate_io.
The calibration results are stored in dba_rsrc_io_calibrate and are a prerequisite for automatic degree of parallelism (parallel_degree_policy=auto).
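A sketch of a calibration run is below; the disk count and latency values are illustrative and should match the actual storage.

```sql
SET SERVEROUTPUT ON
DECLARE
  l_max_iops PLS_INTEGER;
  l_max_mbps PLS_INTEGER;
  l_latency  PLS_INTEGER;
BEGIN
  dbms_resource_manager.calibrate_io(
    num_physical_disks => 4,
    max_latency        => 10,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_latency);
  dbms_output.put_line('max_iops = ' || l_max_iops);
END;
/
```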
Resource Manager Enhancements
Various new functionality has been added to Resource Manager to allow for server and database consolidation scenarios.
Using Resource Manager it is possible to consolidate databases by placing multiple applications in one database and then limiting each application to a certain amount of resource, e.g. IO amount, IO requests, CPU etc.
It is now possible to perform instance caging on a multi-CPU machine. For example, with two instances on a 4 CPU machine it is possible to restrict each instance to a set number of CPUs by using the instance parameter CPU_COUNT and making sure Resource Manager is switched on by issuing the following command
ALTER SYSTEM SET resource_manager_plan= 'default_plan';
NOTE - v$parameter will only show the values for a session, not necessarily for the entire system. Use v$system_parameter for the actual globally set values.
A lot of changes have been made to Enterprise Manager to allow easier usage of Resource Manager. The interface has been redesigned and there are more stats and graphs to monitor resource manager windows etc.
Fault Management
Oracle 11g has a completely new fault management framework. The framework is best accessed via Enterprise Manager using the Support Workbench. This allows for GUI access and use of
- The ADR (Automatic Diagnostic Repository)
- The ability to package logs
- The ability to view automatically packaged incidents and problems.
- Checking for Patches and Bugs
- Raising SRs
- Interacting with Metalink
- Monitoring SRs
- Applying patches
Most of this functionality can be accessed without Enterprise Manager using the ADR CLI; adrci.
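For reference, a typical adrci session might look like this; the home path and incident ID are illustrative.

```shell
adrci> show homes
adrci> set home diag/rdbms/orcl/orcl
adrci> show problem
adrci> show incident
adrci> ips create package incident 12345 in /tmp
adrci> show alert -tail 50
```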
Health Monitoring
11g has some new inbuilt health monitoring capabilities. In particular, it has checkers that can be accessed at the command line, via PL/SQL or via the Enterprise Manager interface. They are
- DB Structure Integrity Check
- Data Block Integrity Check
- Transaction Integrity Check
- Redo Integrity Check
- Dictionary Integrity Check
- Undo Segment Integrity Check
- CF Block Integrity Check
Real Time SQL Monitoring
This component of Oracle allows the DBA to monitor SQL Statements in real time either by PL/SQL packages or the Enterprise Manager GUI.
ADDM Enhancements
ADDM now has an analyze database procedure (dbms_addm.analyze_db) which can be used to analyze RAC environments more easily
var tname varchar2(60);
BEGIN
  :tname := 'my database ADDM task';
  dbms_addm.analyze_db(:tname, 282, 284);
END;
/
It is also possible to direct ADDM to look at specific components. For example, an undersized SGA.
var tname varchar2(60);
BEGIN
  dbms_addm.insert_finding_directive(NULL, 'my undersized SGA directive', 'Undersized SGA', 2, 10);
  :tname := 'my instance ADDM task';
  dbms_addm.analyze_inst(:tname, 1, 2);
END;
/
select dbms_addm.get_report(:tname) from dual;
Other such directive procedures are insert_segment_directive, insert_sql_directive and insert_parameter_directive.
To delete directives, the corresponding procedures delete_finding_directive, delete_segment_directive, delete_sql_directive and delete_parameter_directive can be used.
Advisor results are now classified and named in the view dba_advisor_findings
To find the different findings names issue the following SQL
select finding_name from dba_advisor_finding_names;
Automatic Memory Management
11g can now automatically tune and resize the SGA and PGA using the parameters memory_target (MT) and memory_max_target (MMT).
They work similarly to SGA_MAX_SIZE and SGA_TARGET except they affect both the PGA and SGA.
Setting MMT to 2GB and MT to 1GB allows the DBA to increase the amount of memory given to the instance dynamically at a later date. Having MT at 1GB will allow Oracle to automatically tune the SGA and PGA within the 1GB limit.
NOTE - Setting MT to 2GB and not setting MMT (or setting it to 0) will default MMT to 2GB, and likewise the other way around.
Any child parameters, for example db_cache_size, log_buffer or pga_aggregate_target, can still be set and will act as the minimum bounds for the areas they cover.
DB Smart Flash
DB Smart Flash is support for Solid State Disks. It is an extension of the buffer cache that resides on SSD.
Smart Flash can be switched on with the following parameters
examples
db_flash_cache_file='/dev/foa1'
db_flash_cache_file='/work/flash_cache.dbf'
db_flash_cache_file='+FLASH_DG/flash_cache'
db_flash_cache_size=5G
Tables, indexes and clusters can be placed in the flash cache using the FLASH_CACHE storage clause, e.g. ALTER TABLE MyTable STORAGE (FLASH_CACHE KEEP);
- Size of the flash cache should be between 2 and 10 times the size of the buffer cache
- The flash cache is not auto tuned
- Non standard block sizes are not supported
- The flash cache cannot be dynamically resized
- Adds 100-200 bytes of overhead in the buffer cache for each block in the flash cache.
Setting Table Statistic Preferences
There is a new procedure called set_table_prefs in the dbms_stats package.
This procedure allows the DBA to set the table or index statistics preferences. Notable preferences include
PUBLISH
Determines whether the statistics are published to the dictionary or stored in a pending area for later publication.
In addition to being able to publish statistics at a later date, it is possible to test the pending statistics in a session to check their validity.
For example, a DBA could run an SQL statement using the current statistics in their session. Then by setting the session
parameter OPTIMIZER_USE_PENDING_STATISTICS=TRUE, the DBA can test the same query using the pending statistics.
If the pending statistics show to be "good", then they can be published to the dictionary using dbms_stats.publish_pending_stats
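The pending-statistics workflow might look like this; the schema and table names are illustrative.

```sql
-- keep newly gathered stats in the pending area rather than publishing
EXEC dbms_stats.set_table_prefs('SH', 'SALES', 'PUBLISH', 'FALSE');
EXEC dbms_stats.gather_table_stats('SH', 'SALES');

-- test the pending statistics in this session only
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
-- ... run and check the problem queries ...

-- if they look good, publish them to the dictionary
EXEC dbms_stats.publish_pending_stats('SH', 'SALES');
```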
STALE_PERCENT
Determines the percentage of rows that must change before the statistics become stale. The default is 10%.
INCREMENTAL
Allows the collection of global statistics in an incremental manner on partitioned objects. If no DML activity has occurred on a partition of a table, then when Oracle gathers global statistics it knows not to scan that partition, as its statistics have not changed.
Statistics for objects are now kept for a certain period of time before being purged. This allows DBAs to restore statistics to a previous point in time for a given table or index. The default retention is 31 days but this can be changed.
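Restoring and inspecting historical statistics might look like this; the names and retention value are illustrative.

```sql
-- check how far back statistics can be restored
SELECT dbms_stats.get_stats_history_availability FROM dual;

-- restore a table's statistics as they were 7 days ago
BEGIN
  dbms_stats.restore_table_stats(
    ownname         => 'SH',
    tabname         => 'SALES',
    as_of_timestamp => SYSTIMESTAMP - INTERVAL '7' DAY);
END;
/

-- change the retention period from the 31 day default
EXEC dbms_stats.alter_stats_history_retention(60);
```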
Hash Based Sampling of Column Statistics
A new hash based algorithm has been implemented for collecting column statistics. However, the algorithm is only invoked if auto_sample_size is used for estimate_percent.
Multicolumn Column Based Statistics
11g allows for the collection of statistics where column data is related. For example, if a table contains make and model for cars, then clearly these columns will be related, as only Renault makes the Megane. As such, the optimizer can be directed to collect statistics for these related columns using the following commands
select dbms_stats.create_extended_stats('MySchema','MyTable','(make,model)') from dual;
exec dbms_stats.gather_table_stats('MySchema','MyTable', method_opt=>'for all columns size 1 for columns (make,model) size 3');
After running create_extended_stats, the extensions can be retrieved from the view dba_stat_extensions
11g now accommodates the collection of statistics on function based index columns. For example, if there is a function based index on model of the type upper(model), then these statistics can be gathered using the following commands
select dbms_stats.create_extended_stats('MySchema','MyTable','(upper(model))') from dual;
exec dbms_stats.gather_table_stats('MySchema','MyTable', method_opt=>'for all columns size 1 for columns (upper(model)) size 3');
DDL statements can now wait for DML locks to be released by setting the parameter DDL_LOCK_TIMEOUT=<timeout value in secs> at the instance or session level.
The lock table statement now has a wait parameter to tell Oracle to wait for a determined number of seconds to get a lock on a table.
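For example; the object name and timeout values are illustrative.

```sql
-- wait up to 30 seconds for DML locks before failing a DDL statement
ALTER SESSION SET ddl_lock_timeout = 30;

-- wait up to 10 seconds to acquire the table lock
LOCK TABLE employees IN EXCLUSIVE MODE WAIT 10;

-- alternatively, fail immediately if the lock is unavailable
LOCK TABLE employees IN EXCLUSIVE MODE NOWAIT;
```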
There are a number of new columns in v$system_event and v$system_wait_class to report on foreground-only activity (the new columns are suffixed _FG).
Online redefinition now supports
- Tables with materialized views and view logs
- Triggers with ordering dependency
- Does not always invalidate dependent objects
Changing objects and the management of the dependent objects has been improved in the following ways
- Adding a column to a table no longer invalidates the dependent objects.
- Adding a PL/SQL unit to a package does not invalidate dependent objects.
Make sure the new PL/SQL unit is added at the end of the package. Adding it into the middle of the package will invalidate all subsequent objects in the package, although not the objects that appear before the new unit.
Dependencies are tracked automatically and no changes are required to take advantage of this feature.
Invisible Indexes
It is now possible to create invisible indexes on a table. An invisible index is maintained like a normal index but is ignored by the optimizer.
This is achieved using the following commands
CREATE INDEX MyIndex ON MyTable (col1) INVISIBLE;
ALTER INDEX MyIndex INVISIBLE;
ALTER INDEX MyIndex VISIBLE;
By setting the session parameter OPTIMIZER_USE_INVISIBLE_INDEXES=TRUE, the optimizer can be forced to consider invisible indexes.
Adaptive Cursor Sharing
In previous versions, when Oracle peeked at a bind variable for the first run of a SQL statement, it could often get an unrepresentative value for the bind variable. In 11g, adaptive cursor sharing attempts to overcome this issue for statements that use bind variables.
On the first run of the SQL, Oracle will peek at the variable and, using the value, create a selectivity cube. The cube is effectively a range of values that bind variables can fall into before a new plan is required.
Then on each subsequent run of the SQL, Oracle will peek at the bind variables and check whether the bind values fall outside of the selectivity cube. When they do, Oracle creates a new selectivity cube. If the new cube overlaps the old cube then the cubes are merged. If they do not overlap then Oracle will maintain two cubes.
This feature is implemented automatically and cannot be turned off.
The following views can be used to monitor adaptive cursor sharing
This feature is not compatible with Plan Baselines. If OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES is set to true, then only the first generated plan is used.
Oracle offers a rather cumbersome workaround for this. It is suggested that the DBA sets the baseline parameter to false and then runs the application until all possible plans are loaded into the cursor cache, then manually loads the cursor cache into the relevant plan baseline.
SQL Query Result Cache
The SQL Query Result Cache is used to cache the results of a query for later use. This allows an SQL statement to retrieve the results from a cache without having to rerun the query. It is particularly useful for statements that scan many blocks, but retrieve few rows.
The cache is setup using the following initialization parameters
- RESULT_CACHE_MODE=[manual] [force] - Force means all results will be stored in the cache. Manual requires a hint on the SQL before results are stored.
- RESULT_CACHE_MAX_SIZE=[value] - Setting 0 disables the cache. The default is derived as 0.25% of MEMORY_TARGET, 0.5% of SGA_TARGET or 1% of SHARED_POOL_SIZE.
- RESULT_CACHE_MAX_RESULT=[percent] - Determines the maximum usage of the cache for a single SQL statement. Defaults to 5%.
- RESULT_CACHE_REMOTE_EXPIRATION=[time minutes] - Sets expiry time for results dependent on remote objects. 0 means no caching for remote objects.
The hint to be used when RESULT_CACHE_MODE is set to manual is /*+ RESULT_CACHE */. Alternatively, if it is set to force, use /*+ NO_RESULT_CACHE */ to stop results being cached.
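Usage might look like this; the table and column names are illustrative.

```sql
-- with RESULT_CACHE_MODE=MANUAL, request caching explicitly
SELECT /*+ RESULT_CACHE */ channel_id, SUM(amount_sold)
FROM   sales
GROUP  BY channel_id;

-- with RESULT_CACHE_MODE=FORCE, opt a query out of the cache
SELECT /*+ NO_RESULT_CACHE */ COUNT(*) FROM sales;
```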
Caching can be set at the table level so that SQL operating on a table will inherit its caching behaviour from the table. For multi-table queries, all tables need caching enabled before results are cached. The following commands turn caching on for tables
CREATE TABLE MyTable (col1 CHAR(10)) RESULT_CACHE (MODE FORCE);
ALTER TABLE MyTable RESULT_CACHE (MODE FORCE);
If the table mode is default, then caching is determined by the instance parameter. Hints override table level values and the default mode for tables is (mode default)
The following package can be used to view and manage the cache.
- select dbms_result_cache.status from dual; - View the status of the cache.
- execute dbms_result_cache.memory_report; - View statistics on the cache
- execute dbms_result_cache.flush; - Clear the cache
- execute dbms_result_cache.invalidate('MySchema','MyTable'); - Invalidate the cache for a given object.
The following views can be used; v$result_cache_statistics, v$result_cache_memory, v$result_cache_objects and v$result_cache_dependency.
Other points to note
- Flashback queries can be cached
- SQL using temporary or dictionary objects is not cached.
- Bind variable queries are cached but results only valid for queries with the same bind values.
- Queries built on non-current data (i.e. read-consistent views) are not cached.
- Function results can be cached
OCI Client Side Caching
This allows for caching of SQL results on the client side. It is turned on using the following parameters
On the instance
client_result_cache_size = [size]
client_result_cache_lag = [milliseconds]
On the client in the sqlnet.ora
oci_result_cache_max_size = [size]
oci_result_cache_max_rset_size = [size]
oci_result_cache_max_rset_rows = [rows]
To use this feature, the client must be relinked with an 11.1 or higher client library and connected to an 11.1 or higher server.
PL/SQL and JAVA Compilation
There have been a few enhancements for PL/SQL and JAVA compilation.
PL/SQL Compilation Control
This can be controlled using the following parameter
- PLSQL_CODE_TYPE=[native][interpreted] - Native compilation can now be used without any third party compilers, will be no slower than in Oracle 10g and may be very much faster.
JAVA Compilation Control
This can be controlled using the following parameter
- JAVA_JIT_ENABLED=[true][false] - This is on by default and will improve the performance of JAVA.
The set newname command in RMAN now has substitution variables
- %b - the filename without the directory path
- %f - the absolute file number
- %U - a system generated unique file name
Below is an example of an RMAN run block
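The example itself appears to be missing from the page; a sketch of a run block using the new %b variable is below. The paths are illustrative only.

```shell
run {
  # relocate all datafiles, keeping their base filenames
  set newname for database to '/u02/oradata/orcl/%b';
  restore database;
  switch datafile all;
  recover database;
}
```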
The following performance enhancements have been made
- Fast Incremental Backups on Physical Standby Databases - Physical standby databases can now have a block change tracking file
- Improved Block Recovery - RMAN now tracks block corruption in v$database_block_corruption and flashback logs will be used if possible to recover corrupt blocks.
- Enhancements to Validate - Has a number of new options including validate database and validate block
There are now four types of backup compression; BASIC, LOW, MEDIUM and HIGH.
All but BASIC require the Advanced Compression option.
UNDO blocks that are not required for recovery are no longer backed up.
VSS Enabled Backups
Support for Windows based VSS enabled backups is available for 11g.
Archive Log Backup Improvements
In environments where multiple components that require archive logs are in use (for example flashback, Data Guard, streams), Oracle ensures that the deletion policy only removes archive logs once all components have finished with them.
For Data Guard, this can be turned on using
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
Parallel Backup for Large Files
RMAN backup command has a new parameter that allows RMAN to split large files across many channels.
BACKUP ..... SECTION SIZE [value] [K|M|G]
Prior to 11g, the duplicate database command required the backups of the source database (target) to be accessible on the destination system. In 11g, the duplicate command can operate across the network. As such, the duplicate command will instruct the source (target) database to copy image copies and archive logs across to the destination system.
The destination database (auxiliary) still needs to exist.
The RMAN commands to achieve this are detailed below.
duplicate target database to MyTestDB
  from active database
  spfile
    parameter_value_convert '/u01', '/u02'
    set "db_unique_name"="newdb"
  ... ... ;

duplicate target database to MyTestDB
  for standby
  from active database
  spfile
    parameter_value_convert '/u01', '/u02'
    set "db_unique_name"="newdb"
  ... ... ;
Furthermore, there is no longer a need to connect to the target database, but a connection must still be made to the RMAN catalog. For example
RMAN> connect auxiliary sys@MyTestDB
RMAN> connect catalog rcuser@RMANDB
RMAN> duplicate database orcl to MyTestDB
There are now two extra parameters for duplicate database
- NOREDO - Used if the target database is in noarchivelog mode.
- UNDO TABLESPACE - Used if the target database is not open and no catalog connection is made.
Managing Archival Database Backups
It is now possible to create an archival backup. This is a backup that may need to be kept for a set period of time for legal or business reasons. If a backup is marked as an archival backup, it is kept for the specified time regardless of the obsolete or recovery window settings.
To archive a database use the following command
RMAN> change backup tag 'MyVeryImportant Backup' keep forever;
RMAN> change backup tag 'MyVeryImportant Backup' keep until time '[date string]';
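An archival backup can also be created directly; the tag, retention period and restore point name below are illustrative:

```sql
RMAN> backup database tag 'FY08Q4' keep until time 'sysdate+365' restore point FY08Q4;
```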
To change the status of a database copy use the following command
change copy of database controlfile nokeep;
All individual or possible restore points can be listed using
RMAN> list restore point all;
RMAN> list restore point [name];
Recovery of Pfiles and Spfiles from Memory
It is now possible to create a pfile or spfile from memory with the following command
create spfile from memory;
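Both forms are available; the pfile path below is hypothetical:

```sql
create spfile from memory;
create pfile='/tmp/initorcl.ora' from memory;
```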
Managing RMAN Catalogs
There is now an import catalog command to allow the merging of one or many rman catalogs into one.
RMAN> connect catalog cat111/oracle@destDB
RMAN> import catalog cat101/oracle@srcDB
RMAN> import catalog cat102/oracle@srcDB DBID=1234244,122343;
RMAN> import catalog cat103/oracle@srcdb nounregister
The nounregister option leaves the databases registered in the source catalog after the import.
The DBID option imports the metadata for the given databases only.
There is now the concept of virtual catalogs which allow segregation of duties for DBAs/users where each DBA/User can be granted access to a virtual catalog.
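A sketch of setting one up, assuming a base catalog owner rcuser and a hypothetical virtual catalog user vpcuser:

```sql
-- As the base catalog owner
RMAN> connect catalog rcuser/oracle@RMANDB
RMAN> grant catalog for database orcl to vpcuser;

-- As the virtual catalog owner
RMAN> connect catalog vpcuser/oracle@RMANDB
RMAN> create virtual catalog;
```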
The following details the improvements to Oracle's flashback technology
Flashback Data Archives
Flashback Archive is an archive for historical data. It allows the grouping of objects into a Flashback Archive, which in turn allows users to keep a full historical set of data that can be referenced at a later date and kept for a set period of time.
The data is kept in a separate tablespace and is both secure and compressed.
An archive can be created using the create flashback archive command.
The following example shows how to enable and use flashback archive
create flashback archive flba1 tablespace tbs1 quota 10g retention 5 year;
alter table mytable flashback archive flba1;
select mycol from mytable as of timestamp to_timestamp('2008-10-10 00:00:00','YYYY-MM-DD HH24:MI:SS');
It is possible to create a default flashback archive by adding a keyword to the create statement. This allows a simplified alter table statement.
create flashback archive default flba1 tablespace tbs1 quota unlimited retention 5 year;
alter table mytable flashback archive;
alter table mytable no flashback archive; -- Use to disable flashback archive on a table.
Data can be purged, flashback archives dropped or modified and if required recovered.
alter flashback archive flba1 purge before timestamp (systimestamp - interval '1' day);
drop flashback archive flba1;
alter flashback archive flba1 modify retention 2 year;
insert into mytable (select * from mytable as of timestamp to_timestamp('2008-10-10 00:00:00','YYYY-MM-DD HH24:MI:SS'));
There are a number of new views, including dba_flashback_archive, dba_flashback_archive_ts and dba_flashback_archive_tables.
- The database needs to be using automatic undo in order to use flashback archives.
- Certain DDL statements fail on tables enabled for a flashback archive: conversions of LONG to LOB, partition maintenance operations, upgrade table and drop table.
Flashback Transaction Backout
Flashback Transaction Backout provides a logical recovery mechanism for rolling back committed transactions. To use this feature, supplemental logging needs to be switched on.
alter database add supplemental log data;
alter database add supplemental log data (primary key) columns;
alter database add supplemental log data (foreign key) columns;
grant execute on dbms_flashback to MySchema;
grant select any transaction to MySchema;
Enterprise Manager can be used to flash back a transaction, or it can be done using dbms_flashback.
This does not require any database options.
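A minimal sketch using dbms_flashback.transaction_backout; the transaction id (XID) below is hypothetical and would normally be obtained from flashback_transaction_query:

```sql
declare
  txns sys.xid_array := sys.xid_array('0500090033000000'); -- hypothetical XID
begin
  -- Back out a single committed transaction (and, by default, no dependent ones)
  dbms_flashback.transaction_backout(numtxns => 1, xids => txns);
end;
/
```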
Flashback database can be enabled whilst the database is up and running.
Flashback of a database can be monitored using v$session_longops.
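Enabling it could be sketched as follows; the retention value is illustrative:

```sql
alter system set db_flashback_retention_target = 1440;  -- minutes
alter database flashback on;  -- no longer requires the database to be mounted and closed
```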
LogMiner allows the auditing of changes on a database by accessing the redo logs. It requires supplemental logging to be switched on.
Data Pump now has a legacy mode. If it identifies any imp/exp commands it will enter legacy mode and map the old parameters with the new datapump parameters.
Data Recovery Advisor
A new advisor is available that helps with data recovery. The advisor performs the following tasks
- Analyses data failures
- Lists failures by severity
- Advises on repair options
- Chooses and executes repair options
- Performs proactive checks after repair
It can be accessed via the RMAN interface or via Enterprise Manager.
It is limited to single instance databases and cannot be used for RAC environments. It can support failover to a standby database, but cannot be used to analyse or recover a failed standby database.
The following RMAN command will list database failures
RMAN> connect target / nocatalog
RMAN> list failure all; -- Can use critical, high, low or closed instead of all
The other commands that can be used with the advisor are
- advise failure - Displays recommended commands to fix the failure
- repair failure - Runs the recommended commands to fix the failure. This must run in the same RMAN session as the above command.
- change failure - Changes the priority or closes failures.
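A typical advisor session could look like this:

```sql
RMAN> list failure;
RMAN> advise failure;
RMAN> repair failure preview;  -- show the repair script without running it
RMAN> repair failure;
```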
The following views keep track of failures: v$ir_failure, v$ir_failure_set, v$ir_manual_checklist and v$ir_repair.
New Corruption Detection Parameters
There are two new database parameters to help detect and prevent block corruption.
- db_lost_write_protect - Enables lost write detection; a standby database can then be used to detect lost writes on the primary.
- db_ultra_safe - When set, ensures db_block_checksum and db_block_checking are effectively set to their safest values.
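A sketch of setting these parameters; db_ultra_safe is assumed to be static, so scope=spfile and a restart are used:

```sql
alter system set db_lost_write_protect = typical;
alter system set db_ultra_safe = data_and_index scope=spfile;
```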