


Version 11.70 Overview
John F. Miller III, IBM

Talk Outline
• 11.70 Overview
• Storage Enhancements
– Storage Provisioning
– Storage Optimization
– Compression
• Index Improvements
• Fragmentation / Partitioning
– Interval Fragmentation
– Add and Drop Fragments
– Forest of Trees Indexes
– Create Index extent sizes
– Constraint without an index
• Miscellaneous
– Network Performance
– Pre-Load C-UDRs Online
– Fragment Level Statistics
• Data Warehouse
– Multi Index Scans
– Star and Snowflake joins

Storage Provisioning

What is Storage Provisioning?
• To proactively or reactively add storage to eliminate out-of-space errors
– Monitoring spaces and automatically growing a container when its free space falls below a specific amount
– Stalling an SQL operation that is about to fail because of insufficient space until space is allocated to the depleted container
• The ability to tell Informix about disk space that can be used to solve storage issues in the future
– Raw devices
– Cooked files
– Directories

Benefits of Storage Provisioning
• "Out-of-space" errors are virtually eliminated
• Manual expansion and creation of storage spaces without having to worry about where the space will come from
• Automatic expansion of dbspaces, temporary dbspaces, sbspaces, temporary sbspaces, and blobspaces
• Feature is fully incorporated into OAT

Storage Provisioning: The Power of 2
• Two available modes:
– Manual
– Automatic
• Two available space expansion methods:
– Chunk extension
– Chunk creation
• Two available interfaces:
– sysadmin task()/admin() functions (SQL interface)
– OAT (graphical interface)

Storage Pool Facts
• What is the storage pool?
– How the DBA tells Informix about space it can use to solve future space issues
– A file, device, or directory in the pool is called an entry
• There is one storage pool per IDS instance
• You can add, modify, delete, and purge storage pool entries:
EXECUTE FUNCTION task("storagepool add", "/work/dbspaces/dbs1", "0", "1 GB", "100 MB", "1");
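The positional arguments of "storagepool add" are terse, so here is a hedged reading of the same call (argument meanings per my recollection of the SQL admin API; verify against your release's documentation):

EXECUTE FUNCTION task("storagepool add",
    "/work/dbspaces/dbs1",  -- entry: a raw device, cooked file, or directory
    "0",                    -- offset into the device/file, in KB
    "1 GB",                 -- total space the server may allocate from this entry
    "100 MB",               -- size of each chunk created or extension taken from it
    "1");                   -- priority: 1 = high, 2 = medium, 3 = low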

OAT's View of the Storagepool
[Screenshot: automatic policies and a summary of space left in the storage pool]

Extendable Chunks
• The ability to expand an existing chunk
• Default upon creation is non-expanding chunks
• Example of enabling the extendable property of chunk 13:
EXECUTE FUNCTION task("modify chunk extendable on", "13");
• A chunk can be extended automatically or manually
• Example of manually extending chunk 27 by 2 GB:
EXECUTE FUNCTION task("modify chunk extend", "27", "2 GB");
• Extending a chunk does NOT consume space from the storagepool

OAT's View of the Chunk Pod
[Screenshot: fragmentation map of the selected chunk]
Chunk actions:
• Extend a chunk
• Add a new chunk
• Modify chunk settings
• Drop a chunk

Expanding a Storage Container
• Keep the addition of space to a storage container simple
– The creator of a storage container specifies how a space should grow
– Manual allocations of space: just say "do it" (see the sketch below)
• Use the predefined container provisioning policies to allocate new space to a container:
1. Determine if any chunk in the storage container is expandable
2. If no chunk can successfully expand, add a new chunk
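A hedged sketch of driving both paths manually through the SQL admin API. The command names "modify space expand" and "modify space sp_sizes" are from my recollection of the 11.70 API, and the space name and sizes are illustrative, so verify before relying on them:

-- Ask the server to run the expansion policy for a space right now,
-- adding roughly 100000 KB via chunk extension or a new chunk
EXECUTE FUNCTION task("modify space expand", "orders_dbs", "100000");

-- Tune how the space grows on its own: new-chunk create size and
-- chunk extend size (interpreted as a percentage or KB)
EXECUTE FUNCTION task("modify space sp_sizes", "orders_dbs", "20", "10");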

Expanding a Space in OAT
[Screenshot]

Creating or Dropping a Space with the Storagepool
• You can create a new storage container utilizing the space from the storage pool
• Example of creating a 100 MB dbspace called orders_dbs:
EXECUTE FUNCTION ADMIN('create dbspace from storagepool', 'orders_dbs', '100 MB');
• You can drop an existing storage container and return the space to the storage pool
• Example of dropping a dbspace called dbs1:
EXECUTE FUNCTION ADMIN('drop dbspace to storagepool', 'dbs1');

Save Your Company

Prevent Accidental Disk Initialization
• Save companies from potential disasters
• Accidental disk re-initialization (i.e. oninit -i)
• New onconfig parameter FULL_DISK_INIT:
– 0: Only allow system initialization if page zero of the rootdbs is not recognized
– 1: Always allow system initialization; after initialization, the value is automatically set to 0

Storage Optimization

Optimizing Tables
• As a DBA I need to ...
– Reduce the number of extents a table contains
– Move all rows to the beginning of a table
– Return unused space at the end of a table to the system
– Shrink a partially used extent at the end of a table
All this while accessing and modifying the table!!! AND while you are watching your favorite TV show

Storage Management Overview
• Defragment Extents – Combines extents to reduce their number
• Data Compression – Reduces the amount of storage taken by a single row
• Table Compaction – Reduces the number of pages utilized by a table
• Index Compaction – Ensures the index pages are kept full
• Automate the optimization of table storage – Applies policies to optimize tables

Optimizing Table Extents - Defragment
• The number of extents a table/partition can have has increased
• Defragment Extents
– Moves extents to be adjacent
– Merges the extents into a single extent
• Example:
EXECUTE FUNCTION ADMIN('DEFRAGMENT', 'db1:customer');
[Diagram: a dbspace where Customer extents interleaved with Orders, Items, and Products extents are merged, reducing the customer table to a single extent]

Optimizing Tables and Indexes

Defragment Table Extents Online
[Screenshot]

Data Compression
• Reduces the space occupied by the row
• Compressing a table can be done online
• Compress either a table or a fragment
• Custom dictionary built for each fragment to ensure the highest levels of compression
• Tables with compressed rows are ALWAYS variable-length rows
• Many benefits:
– Smaller archives
– More data in the buffer pool
– Fewer long/forwarded rows
– Fewer I/Os for the same amount of data read/written
execute function task("compress table", "tab1", "db");

REPACK Command
• Moves all rows in a table/fragment to the beginning, leaving all the free space at the end of the table
• Online operation; users can be modifying the table
execute function task("table repack", "customer", "db");
[Diagram: customer rows (Tim, Frank, Chris, Jamie, Lenny, Roy, Travis, Steve, John) repacked to the start of the table]

SHRINK Command
• Frees the space at the end of a table so other tables can utilize this space
– Entire extents are freed
– The last extent in a table can be partially freed
– Will not shrink a table smaller than the first extent size
• New command to modify the first extent size: ALTER TABLE ... MODIFY EXTENT SIZE (see the sketch below)
• Online operation
execute function task("table shrink", "customer", "db");
[Diagram: customer rows packed at the start of the table, with the trailing free space returned]
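A minimal sketch of combining the two statements above so a shrink can go below the original first-extent size. The table name and sizes are illustrative, and the ALTER TABLE form follows the slide's "ALTER TABLE MODIFY EXTENT SIZE", so verify the exact syntax in your release:

-- Reduce the first-extent size to 64 KB, then shrink the table
ALTER TABLE customer MODIFY EXTENT SIZE 64;
EXECUTE FUNCTION task("table shrink", "customer", "db");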

Automatically Optimize Data Storage

Index Optimization

Create Index with a Specific Extent Size
• The CREATE INDEX syntax has been enhanced to support the addition of an extent size for indexes
• Better sizing and utilization
CREATE INDEX index_1 ON tab_1(col_1) EXTENT SIZE 32 NEXT SIZE 32;
• Default index extent size is: (index key size / data row size) * data extent size

Create Index Extent Sizes
create index cust_ix1 on customer (cust_num) in rootdbs extent size 80 next size 40;
• Ability to specify the extent size when creating an index
• Allows for optimal space allocation
• Utilities such as dbschema report index extent sizes

Creating Constraints without an Index
CREATE TABLE parent(c1 INT PRIMARY KEY CONSTRAINT parent_c1, c2 INT, c3 INT);
CREATE TABLE child(x1 INT, x2 INT, x3 VARCHAR(32));
ALTER TABLE child ADD CONSTRAINT (FOREIGN KEY(x1) REFERENCES parent(c1) CONSTRAINT cons_child_x1 INDEX DISABLED);
• Saves the overhead of the index for small child tables

B-Tree Index
[Figure: a B-tree with a single root node above twig and leaf pages]
The figure shows one root node; access to the underlying twig and leaf pages goes through this page, which is where there can be mutex contention.

New Index Type: "Forest of Trees"
• Traditional B-tree indexes suffer from performance issues when many concurrent users access the index
– Root node contention can occur when many sessions are reading the same index at the same time
– The depth of a large B-tree index increases the number of levels created, which results in more buffer reads required
• Forest of Trees (FOT) reduces some of the B-tree index issues:
– Index is larger but often not deeper
• Reduces the time for index traversals to leaf nodes
– Index has multiple subtrees (root nodes) called buckets
• Reduces root node contention by enabling more concurrent users to access the index

Forest of Trees Index (FOT Index)
• Reduces contention on an index's root node
• Several root nodes
• Some B-tree functionality is NOT supported
– max() and min()
create index index_2 on tab1(c1, c2) hash on (c1) with 3 buckets;
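For context, a minimal sketch of the access pattern this index helps (the query is illustrative): an equality predicate on the hashed column c1 lets the server pick a single bucket, so concurrent sessions spread across the root nodes instead of contending on one.

SELECT c1, c2
  FROM tab1
 WHERE c1 = 1001      -- equality on the hash column selects one bucket
   AND c2 > 10;       -- remaining key columns are searched within that subtree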

FOT - Determining Use - ONCHECK & SYSMASTER
• Check oncheck -pT information:

                     Average    Average
Level    Total    No. Keys    Free Bytes
-----    -----    --------    ----------
    1      100          27         21350
    2      655          15          4535
-----    -----    --------    ----------
Total      755          42         25885

There are 100 Level 1 buckets (root nodes)
• Check the sysmaster database:
select nhashcols, nbuckets from sysindices;

Network & UDR Performance

Network Performance Improvements
• Caching network services
• Multiple listener threads for a single server name
• Multiple file descriptor servers
• Previous network improvements
– Dynamic start and stop of listener threads
– Pre-allocated user sessions

Network Performance - Caching Network Services
• Database caching of hosts, services, users, and groups
• Avoids going to the operating system for each network call
• Administrator-defined timeout value set for network caches
• ONCONFIG example:
NS_CACHE host=900, service=900, user=900, group=900
• Each cache is dynamically configurable
• onstat -g cache prints out the effectiveness of the caches

Network Performance – Multiple Listeners
• Able to define multiple listener threads for a single DBSERVERNAME and/or DBSERVERALIAS
• Add the number of listeners to the end of the alias
• Example: to start three listener threads for the idsserver, modify the ONCONFIG as follows:
DBSERVERNAME idsserver-3

Network Performance Results
• My simple network performance tests – 200 users connecting and disconnecting
• Connection throughput on an AIX server improved by 480%
• Connection throughput on a Linux server improved by 720%

Computer Type    Without Improvements    Utilizing Improvements
AIX 64           2m 5s                   27s
Linux 64         10m 11s                 1m 20s

Improve Throughput of C User-Defined Routines (C-UDRs)
• Preloading a C-UDR shared library allows Informix threads to migrate from one VP to another during the execution of the C-UDR
– Increase in performance
– Balance workloads
• Without this feature
– The C-UDR shared libraries are loaded when the UDRs are first used
– The thread executing the UDR is bound to the VP for the duration of the C-UDR execution
PRELOAD_DLL_FILE $INFORMIXDIR/extend/test.udr
PRELOAD_DLL_FILE /var/tmp/my_lib.so

Verifying the C-UDR Shared Library is Preloaded
• online.log during server startup:
14:23:41 Loading Module
14:23:41 The C Language Module loaded
• onstat -g dll new flags:
– 'P' represents preloaded
– 'M' represents thread can migrate

Datablades:
addr        slot  vp  baseaddr      flags  filename
0x4b247310  15    1   0x2a985e3000  PM     /var/tmp/test.udr
0x4c2bc310  15    2   0x2a985e3000  PM     /var/tmp/test.udr
0x4c2e5310  15    3   0x2a985e3000  PM     /var/tmp/test.udr

Update Statistics

Agenda
• New Brand Name and Editions
• Simplified Installation, Migration, and Portability
• Flexible Grid with Improved Availability
• Easy Embeddability
• Expanded Warehouse Infrastructure
• Empower Applications Development
• Enhanced Security Management
• Increased Performance
• Other Features

Seamless Installation and Smarter Configuration
• Can migrate from Informix Version 11.50, 11.10, 10.00, 9.40, or 7.31 directly to Informix Version 11.70
• New installation application, using the new ids_install command, makes it easier to install and configure Informix products and features
– A typical installation now has improved default settings to quickly install the products and features in the software bundle, with preconfigured settings
– The custom installation is also smarter than before and allows you to control what is installed
• Both types of installations allow you to create an instance that is initialized and ready to use after installation
• Must use a custom installation setup if you want to configure the instance for your business needs

Changes to Installation Commands
• Some installation commands changed
– To take advantage of new and changed functionality
– To improve consistency across products and operating systems
• Deprecated commands
– installserver
– installclientsdk
– installconn
• Must use ids_install to install Informix with or without bundled software
• New uninstallids command
– Removes the server, any bundled software, or both
• To remove specific products
– uninstall/uninstall_server/uninstallserver
– uninstall/uninstall_clientsdk/uninstallclientsdk
– uninstall/uninstall_connect/uninstallconnect (formerly uninstallconn)
– uninstall/uninstall_jdbc/uninstalljdbc.exe or java -jar uninstall/uninstall_jdbc/uninstaller.jar (depending on how you installed the JDBC driver)

Auto-Registration and Auto VP Creation
• Database extensions (formerly known as built-in DataBlade modules) are automatically registered
• Prerequisite tasks, such as registering the extensions or creating specialized virtual processors, are no longer required
• The BTS, WFSVP, and MQ virtual processors are created automatically
• The idsxmlvp virtual processor is created automatically when an XML function is first used
• An sbspace is created automatically for basic text searches and spatial extensions, if a default sbspace does not exist
• Basic Text Search, Web Feature Service, Node, Spatial, Binary, Large Object Locator, TimeSeries, and MQ Messaging can now be used without first registering them in your database

dbschema and dbexport Enhancements
• dbschema and dbexport utility enhancement for omitting the specification of an owner
– Can use the new -nw option to generate the SQL for creating an object without specifying an owner

Generating Storage Spaces and Logs with dbschema
• Can now generate the schema of storage spaces, chunks, and physical and logical logs with the dbschema utility
• Choose to generate:
– SQL administration API commands: dbschema -c dbschema1.out
– onspaces and onparams utility commands: dbschema -c -ns dbschema2.out
• For migrations, generate the schema before unloading data using the dbexport and dbimport utilities

SQL administration API format:
# Dbspace 1 -- Chunk 1
EXECUTE FUNCTION TASK ('create dbspace', 'rootdbs', '/export/home/informix/data/rootdbs', '200000', '2', '500', '100')
# Dbspace 2 -- Chunk 2
EXECUTE FUNCTION TASK ('create dbspace', 'datadbs1', '/export/home/informix/datadbs', '5000000', '2', '100')
# Physical Log
EXECUTE FUNCTION TASK ('alter plog', 'rootdbs', '60000')
# Logical Log 1
EXECUTE FUNCTION TASK ('add log', 'rootdbs', '10000')

onspaces/onparams format:
# Dbspace 1 -- Chunk 1
onspaces -c -d rootdbs -k 2 -p /export/home/informix/data/rootdbs -o 0 -s 200000 -en 500 -ef 100
# Dbspace 2 -- Chunk 2
onspaces -c -d datadbs1 -k 2 -p /export/home/informix/data/usrdbs -o 0 -s 5000000 -en 100 -ef 100
# Logical Log 1
onparams -a -d rootdbs -s 10000

Support for the IF EXISTS and IF NOT EXISTS Keywords
• Now you can include the IF NOT EXISTS keywords in SQL statements that create a database object (or a database)
• You can also include the IF EXISTS keywords in SQL statements that destroy a database object (or a database)
– If the condition is false, the CREATE or DROP operation has no effect, but no error is returned to the application
• Simplifies the migration to Informix of SQL applications that were originally developed for other database servers that support this syntax
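A minimal sketch of the conditional forms (object names are illustrative):

CREATE DATABASE IF NOT EXISTS salesdb WITH LOG;
CREATE TABLE IF NOT EXISTS orders (order_id INT PRIMARY KEY, amount DECIMAL(10,2));
DROP TABLE IF EXISTS old_orders;     -- no error even if old_orders was never created
DROP DATABASE IF EXISTS scratchdb;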

Simplified SQL Syntax for Defining Database Tables
• No more restrictions on the order in which column attributes can be defined in DDL statements
– Simplifies the syntax rules for column definitions in the CREATE TABLE and ALTER TABLE statements
• The specifications for default values can precede or follow any constraint definitions
• The list of constraint definitions can also be followed (or preceded) by the default value, if a default is defined on the column
• The NULL or NOT NULL constraint does not need to be listed first if additional constraints are defined
• Simplifies the migration to Informix of SQL applications that were originally developed for other database servers that support this syntax
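A minimal sketch of the relaxed ordering (both definitions are now accepted; table names are illustrative):

CREATE TABLE t_default_first (c1 INT DEFAULT 0 NOT NULL);   -- default before constraint
CREATE TABLE t_constraint_first (c1 INT NOT NULL DEFAULT 0); -- constraint before default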

Stored Procedure Debugging (SPD)
• Need for application developers to debug SPL procedures in Informix when necessary
– Should be able to execute the SPL routine line by line, stepping into nested routines, analyzing the values of the local, global, and loop variables
– Should be able to trace the execution of SPL procedures
• Trace output should show the values of variables, arguments, return values, SQL and ISAM error codes
• Prerequisites
– Informix 11.70 or above
– Integration with the Optim Data Studio procedure debugger
– Integration with the Microsoft Visual Studio debugger
• DRDA must be enabled

SPD - Supported Commands
• Breakpoints
• Run
• Step Over
• Step Into (for nested procedures)
• Step Return
• Get Variable value
• Set Variable value

Explicit PDQ vs Implicit PDQ
• Explicit PDQ
– User setting (SET PDQPRIORITY statement)
– All queries in the current session use the same setting
• Implicit PDQ
– IDS determines resource requirements based on the optimizer's estimates
– Each query can have a different PDQ setting

Implicit PDQ - Enable
• SET ENVIRONMENT IMPLICIT_PDQ ON/OFF
– Enables/disables implicit PDQ for the current session
– When enabled
• Informix automatically determines an appropriate PDQPRIORITY value for each query
• Informix ignores the explicit PDQ setting unless BOUND_IMPL_PDQ is also set
– When disabled
• Informix does not override the current PDQPRIORITY setting
• SET ENVIRONMENT BOUND_IMPL_PDQ ON/OFF
– Use the explicit PDQPRIORITY setting as an upper bound when calculating the implicit PDQ setting
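A minimal sketch of a session that lets the server pick per-query parallelism but caps it at an explicit ceiling (the priority value is illustrative):

SET PDQPRIORITY 50;                 -- explicit setting, acts as the ceiling below
SET ENVIRONMENT IMPLICIT_PDQ ON;    -- server now chooses PDQPRIORITY per query
SET ENVIRONMENT BOUND_IMPL_PDQ ON;  -- implicit values are bounded by the explicit 50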

Agenda
• New Brand Name and Editions
• Simplified Installation, Migration, and Portability
• Flexible Grid with Improved Availability
• Easy Embeddability
• Expanded Warehouse Infrastructure
• Empower Applications Development
• Enhanced Security Management
• Increased Performance
• Other Features

Deployment Assistant (DA) – Self Configuring
• Enables users to easily package snapshots of Informix instances and/or their data, in preparation for deployment
– In past releases a snapshot had to be manually created
• Built-in intelligence to capture and configure an Informix snapshot more easily
– Allows for reduction of the packaged instance to the user's minimum desired configuration
• Graphical User Interface (GUI) developed in Java/Eclipse SWT
• ifxdeployassist command
– Starts the deployment assistant interface, which prompts for the required information to capture the instance

Deployment Assistant (DA) – Packages
• Produces packages that are ready for use by the Deployment Utility (DU)
• Build a package containing
– Informix
– (Optional) pre-built database(s)
– (Optional) applications
• Compress the package without using 3rd-party compression tools (BZIP2, GZIP, TAR, and ZIP)
• Deploy, decompress, and install the package on multiple systems
• Good for media distribution such as CDs
• Supported on Windows and Linux
• No current support for data on RAW devices

Deployment Assistant (DA) – Usage
• To run the Deployment Assistant, run the following command in $INFORMIXDIR/bin:
ifxdeployassist
• On Windows, executing this command with the INFORMIXSERVER environment variable set will trigger automatic detection of the specified instance

Deployment Utility (DU) – New Options
• IDS 11.50.xC6
– Available on all platforms
– ifxdeploy
– Can deploy a pre-configured snapshot of an IDS instance on one or more machines by unzipping an archive, creating users, updating configuration files, setting file permissions, and so on
– Can create new instances from existing ones or from onconfig.std; or uninstall instances
– Chunks can be dynamically relocated to a new path at deployment time
• New command-line options (11.70)
– The -start option starts the Informix instance after deployment and waits for it to be initialized (equivalent to running oninit -w)
• Optionally add a number of seconds to wait before the command returns
• The Deployment Utility configuration file has a new option START
– The -autorecommend option calculates optimal values for Informix configuration parameters based on planned usage for the instance and the host environment

Deployment Utility (DU) – Example
• To deploy a zipped tar file of an instance that:
– Prints verbose messages
– Sets the SERVERNUM to 2
– Relocates the chunks to /work/chunks
– Sets new TCP listening ports (9091 for SQLI, 9092 for DRDA)
– Starts the instance after deployment
export INFORMIXDIR=/work/ixdir2; export INFORMIXSERVER=ixserver2;
ifxdeploy -file /work/snapshots/ifxdir.tgz -verbose -servernum 2 -relocate /work/chunks -rootpath /work/chunks -sqliport 9091 -drdaport 9092 -start -y
• To create and start a new instance using an existing INFORMIXDIR:
export INFORMIXDIR=/work/ixdir; export INFORMIXSERVER=ixserver2;
ifxdeploy -servernum 2 -sqliport 9091 -drdaport 9092 -start -y

Unique Event Alarms
• Informix uses the event alarm mechanism to notify the DBA about any major problems in the database server
• Default alarm program scripts
– UNIX: $INFORMIXDIR/etc/alarmprogram.sh
– Windows: %INFORMIXDIR%\etc\alarmprogram.bat
• ONCONFIG parameters
– ALARMPROGRAM
– SYSALARMPROGRAM

Unique Event Alarms - Overview
• Informix 11.70 has 79 event class IDs
• For each event alarm class, there can be multiple specific messages used by that class
• In previous releases, it was not easy to differentiate between one type of event alarm and another for the same event alarm class
– Required the user to parse the specific message string which goes with the alarm program as one of its parameters
• Very inconvenient for applications which deeply embed IDS
• Panther provides unique numerical values for each specific message
– Applications can interpret and take actions against each event alarm

Programmability Enhancements
• Consistent return codes for server initialization (oninit)
• Very helpful for applications which administer Informix in deeply embedded environments
• The application can take the appropriate action to bring the instance online successfully
• During server initialization in embedded environments, the application may have to take actions for:
– Shared memory creation/initialization failed
– Could not find libelf/libpam/...
– Incorrect command line syntax
– Error reading/updating onconfig
– Error calculating defaults in onconfig
– Incorrect serial number
– Not DBSA
– Incorrect SQLHOSTS entries

Sample shell script:
#!/bin/sh
# Execute the oninit program
oninit
# Get the return code from the oninit execution
RC=$?
# Validate the return code and take the necessary action
case $RC in
  0)   echo "RC=0: The database server was initialized successfully." ;;
  1)   echo "RC=1: Server initialization has failed." ;;
  187) echo "RC=187: Check the entries in the sqlhosts file." ;;
  221) echo "RC=221: DUMPDIR missing. Creating DUMPDIR."
       mkdir $INFORMIXDIR/tmp
       chmod 770 $INFORMIXDIR/tmp ;;
  *)   echo "Return Code=$RC !" ;;
esac

Embeddability – Other Features
• Automated DB Scheduler tasks added
– Automatic notification when IDS marks an index "bad"
– Automatic table storage optimization based on user-settable parameters
• Informix Embeddability Toolkit
– Tutorial for creating an end-to-end embeddability scenario
– Example scripts for using the Deployment Assistant/Utility
• Install and deployment APIs
– APIs to install and configure Informix from your application

Enhanced Security Management

Selective Row-Level Auditing (SRLA)
• onaudit
– Manages audit masks and configuration
– Need to be DBSSO or AAO
– DBSSO can perform functions related to audit setup
– AAO can perform functions related to audit analysis
– Examples:
onaudit -l 1
onaudit -c
onaudit -a -u sqlqa -e +RDRW
• onshowaudit
– Lets the AAO extract information from an audit trail
– Example: onshowaudit -n

Selective Row-Level Auditing (SRLA) – What's New?
• Previously, there was no way to enable auditing so that it excluded audit events on tables that you did not want to monitor with the onaudit utility
– Enabling could produce huge amounts of useless data
• The database system security officer (DBSSO) can now configure auditing so that row-level events are recorded for designated tables
– Versus for ALL tables used by the database server
• Ability to select only the tables that you want to audit at the row level
– Can improve database server performance, simplify audit trail records, and mine audit data more effectively

SRLA – Syntax
• New table-level property added (AUDIT)
– CREATE TABLE {existing syntax} [WITH AUDIT];
– ALTER TABLE {existing syntax} [ADD AUDIT]; [DROP AUDIT];
• ADTROWS
– New parameter in the audit configuration file (adtcfg)
– 0: No changes in existing row-level auditing behavior (default)
– 1: SRLA is enabled, and only "audit"-enabled tables will generate row-level audit records
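A minimal sketch of the property in use (table names are illustrative):

CREATE TABLE payments (id INT, amount MONEY) WITH AUDIT;  -- audited from creation
ALTER TABLE customer ADD AUDIT;                           -- start auditing an existing table
ALTER TABLE customer DROP AUDIT;                          -- stop row-level auditing again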

Trusted Context – Why Use It?
• Trusted Context is a feature developed by DB2
• Allows connection reuse under a different userid with authentication, to avoid the overhead of establishing a new connection
• Allows connection reuse under a different userid without authentication
– Accommodates application servers that need to connect on behalf of an end-user but do not have access to that end-user's password to establish a new connection on their behalf
• Allows users to gain additional privileges when their connection satisfies certain conditions defined at the database server

Trusted Context – What Is It?
• Database object created by the database security administrator (DBSECADM)
– Defines a set of properties for a connection that, when met, allow that connection to be a "trusted connection" with special properties
• The connection must be established by a specific user
• The connection must come from a trusted client machine
• The port over which the connection is made must have the required encryption
• If the above criteria are met, the connection will allow changes in userid and privileges as defined in the trusted context

Trusted Context – Steps
• Step 1: Create Trusted Context objects
– Created at the database level
– Must be created by DBSECADM before trusted connections can be established
– Can use OS users or mapped users
• Step 2: Establish trusted connections
– Must satisfy the criteria defined in the Trusted Context
– Provision to switch user
– Use transactions within the switched-user session

Trusted Context – Steps
CREATE TRUSTED CONTEXT CTX1
  BASED UPON CONNECTION USING SYSTEM AUTHID BOB
  DEFAULT ROLE MANAGER
  ENABLE
  ATTRIBUTES (ADDRESS '9.26.113.204')
  WITH USE FOR JOE, MARY WITHOUT AUTHENTICATION;
• Creates a Trusted Context object named CTX1
• Will allow connections from 9.26.113.204
• Can switch to user Joe or Mary once the trusted connection is established

Trusted Context – Switching Users
• Switch to any user defined in the Trusted Context object scope
• Perform database operations
• Audit records will show the switched user as the originator of the operations
• If using transactions, commit or roll back before switching to a new user (see the sketch below)
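A hedged sketch of a switch on an established trusted connection. The SET SESSION AUTHORIZATION statement is my assumption for the switch mechanism here, and the table is hypothetical; the commit-before-switch step follows the slide's advice:

COMMIT WORK;                          -- finish open work under the current user
SET SESSION AUTHORIZATION TO 'JOE';   -- assumed switch statement; JOE is in CTX1's scope
INSERT INTO audit_demo VALUES (1);    -- audit records now attribute this to JOE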

Informix Mapped Users
• Can now configure Informix so that users no longer require operating system accounts to connect
– Allows users authenticated by an external authentication service (such as Kerberos or Microsoft Active Directory) to connect to Informix
• A DBSA turns on the USERMAPPING parameter of the onconfig file and maps externally authenticated users to user properties in tables of the SYSUSER database
• Onconfig variable:
USERMAPPING OFF|ADMIN|BASIC

Informix Mapped Users – Example
• grant access to bob properties user fred;
– This means that when 'bob' connects to Informix, as far as operating system access is concerned, Informix will use the UID, GID(s), and home directory of user 'fred' (which must be a user name known to the OS)
• grant access to bob properties user fred, group (ifx_user), userauth (dbsa);
– This is similar to the previous entry. User 'bob' will use UID 3000 ('fred') and GIDs 3000 (users), 200 (staff), and the extra group 1000 (ifx_user)
– Additionally, assuming that USERMAPPING is set to ADMIN in the ONCONFIG file, 'bob' will be treated as a DBSA

Agenda
• New Brand Name and Editions
• Simplified Installation, Migration, and Portability
• Flexible Grid with Improved Availability
• Easy Embeddability
• Expanded Warehouse Infrastructure
• Empower Applications Development
• Enhanced Security Management
• Increased Performance
• Other Features

What is a Flexible Grid?
• A named set of interconnected servers for propagating commands from an authorized server to the rest of the servers in the set
• Useful if you have multiple servers and you often need to perform the same tasks on every server
• The following types of tasks are easily run through a grid:
– Administering servers
– Updating the database schema and the data
– Running or creating stored procedures or UDRs
– Managing and maintaining replication

What are the Features of the New Informix Flexible Grid?
• Nodes in the grid do not have to be identical
– Different tables, different hardware, different OSs, different IDS versions
• Simplify creation and maintenance of a global grid
– Create grid, attach to grid, detach from grid, add/drop node to/from grid
– DDL/DML operations on any node are propagated to all nodes in the grid
– Management of the grid can be done from any node in the grid
– Tables no longer require primary keys
– Integration with OpenAdmin Tool (OAT)

Define/Enable/Disable the Grid
• The grid is managed by using the cdr utility (OAT support enabled)
• Define
– Defines the nodes within the grid
cdr define grid <grid_name> --all
cdr define grid <grid_name> <node_list>
• Enable
– Defines the nodes within the grid which can be used to perform a grid-level operation
– Also used to determine which users are allowed to perform grid operations
cdr enable grid --grid=<grid_name> --user=<user> --node=<node>
• Disable
– Used to remove a node or user from being able to perform grid operations
cdr disable grid --grid=<grid_name> --node=<node>
cdr disable grid --grid=<grid_name> --user=<user>
cdr disable grid -g <grid_name> -n <node> -u <user>

Propagating Database Object Changes
• Can make changes to database objects while connected to the grid and propagate the changes to all the servers in the grid
• Can propagate creating, altering, and dropping database objects to servers in the grid
• The grid must exist, and the grid routines must be executed as an authorized user from an authorized server
• To propagate database object changes:
– Connect to the grid by running the ifx_grid_connect() procedure
– Run one or more SQL DDL statements
– Disconnect from the grid by running the ifx_grid_disconnect() procedure

Example of DDL Propagation
execute procedure ifx_grid_connect('grid1', 'tag1');
create database tstdb with log;
create table tab1 (col1 int primary key, col2 int, col3 char(20)) lock mode row;
create index idx1 on tab1 (col2);
create procedure loadtab1(maxnum int)
  define tnum int;
  for tnum = 1 to maxnum
    insert into tab1 values (tnum, tnum * tnum, 'mydata');
  end for;
end procedure;
execute procedure ifx_grid_disconnect();
Everything between the connect and disconnect calls will be executed on all nodes within the 'grid1' grid

Monitoring a Grid
• NEW: Monitor a cluster: onstat -g cluster
• cdr list grid
– View information about servers in the grid
– View the commands that were run on servers in the grid
– Without any options or a grid name, the output shows the list of grids
• Servers in the grid on which users are authorized to run grid commands are marked with an asterisk (*)
• When you add a server to the grid, any commands that were previously run through the grid have a status of PENDING for that server
• Options include: --source=<node> --summary --verbose --nacks --pending
cdr list grid grid1

Informix Flexible Grid – Requirements
• Enterprise Replication must be running
• Servers must be on Panther (11.70.xC1)
– Pre-Panther servers within the ER domain cannot be part of the grid

Informix Flexible Grid – Quickly CLONE a Primary Server
• Previously, to clone the primary:
– Create a level-0 backup
– Transfer the backup to the new system
– Restore the image
– Initialize the instance
• ifxclone utility
– Clones a primary server with minimal setup and configuration
– Starts the backup and restore processes simultaneously
• No need to read or write data to disk or tape
– Can create a standalone server or a remote standalone secondary server
– Add a server to a replication domain by cloning
– Requires the DIRECT_IO configuration parameter to be set to 0 on both the source and target servers
– Data is transferred from the source server to the target server over the network using encrypted SMX connections

Informix Flexible Grid – DDL on Secondary Servers
• Can now automate table management in high-availability clusters by running Data Definition Language (DDL) statements on all servers
• Can run most DDL statements, such as CREATE, ALTER, and DROP, on secondary servers
• In previous releases, only Data Manipulation Language (DML) statements could be run on secondary servers

Replicate Tables without Primary Keys
• No longer require a primary key for tables replicated by Enterprise Replication (ER)
• Use the WITH ERKEY keywords when defining tables
– Creates shadow columns (ifx_erkey_1, ifx_erkey_2, and ifx_erkey_3)
– Creates a new unique index and a unique constraint that ER uses as a primary key
• For most database operations, the ERKEY columns are hidden
– Not visible to statements like SELECT * FROM tablename;
– Seen in DB-Access table column information
– Included in the number of columns (ncols) in the systables system catalog
• To view the contents of the ERKEY columns:
SELECT ifx_erkey_1, ifx_erkey_2, ifx_erkey_3 FROM customer;
• Example:
CREATE TABLE customer (id INT) WITH ERKEY;

Transaction Survival during Cluster Failover
• Can now configure servers in a high-availability cluster environment to continue processing transactions after failover of the primary server
– Transactions running on secondary servers are not affected
– Transactions running on the secondary server that becomes the primary server are not affected
– Transactions running on the failed primary server are terminated
• Benefits
– Reduce application development complexity
• Design applications to run on any node
– Reduce application maintenance
• Reduce the application downtime of cleanup and restarting the application after a failover

Transaction Survival – Configuration
• FAILOVER_TX_TIMEOUT
– Maximum number of seconds the server waits before rolling back transactions after failure of the primary server
– 0: Disable transaction survival (default value)
– >0: Enable transaction survival; 60 seconds seems reasonable
• On the failover node: maximum time to wait for secondary nodes to reconnect before rollback
• On a surviving secondary node: maximum time to wait before returning an error to the user (-1803/-7351)

Fragmentation

Two New Fragmentation Schemes
• List Fragmentation
– Fragments data based on a list of discrete values
– Helps in logical segregation of data
– Useful when a table has a finite set of values for the fragment key and queries on the table have an equality predicate on the fragment key
• Interval Fragmentation
– Fragments data based on an interval (numeric or time) value
– Tables have an initial set of fragments defined by a range expression
– When a row is inserted that does not fit in the initial range fragments, Informix automatically creates a fragment to hold the row

List Fragmentation
• Fragments data based on a list of discrete values
– e.g. states in the country or departments in an organization
• The table below is fragmented on column "state", also known as the fragment key or partitioning key:
CREATE TABLE customer
  (cust_id INTEGER, name VARCHAR(128), street VARCHAR(128),
   state CHAR(2), zipcode INTEGER, phone CHAR(12))
  FRAGMENT BY LIST (state)                             -- fragment key
  PARTITION p0 VALUES ("WA", "OR", "AZ") IN rootdbs,   -- list values
  PARTITION p1 VALUES ("CA") IN rootdbs,
  PARTITION p2 VALUES (NULL) IN rootdbs,
  PARTITION p4 REMAINDER IN rootdbs;

Details of Interval Fragmentation
• Fragments data based on an interval (numeric or time) value
• The table's initial set of fragment(s) is defined by a range expression
• When a row is inserted that does not fit in the initial range fragments
– Informix automatically creates a fragment to hold the row
– No exclusive access required for fragment addition
– No DBA intervention
• Purging a range can be done with a detach and drop
– No exclusive access is required
• If the dbspace selected for the interval fragment is full or down, Informix will skip those dbspaces and select the next one in the list

Example of Interval Fragmentation with Integers
CREATE TABLE orders
  (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR)
  FRAGMENT BY RANGE (order_id)               -- fragment key
  INTERVAL (10000)                           -- interval expression
  STORE IN (dbs1, dbs2, dbs3)                -- list of dbspaces
  PARTITION p0 VALUES < 100000 IN rootdbs;   -- initial value

Example of Interval Fragmentation with Dates
CREATE TABLE orders
  (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR)
  FRAGMENT BY RANGE (order_date)                          -- fragment key
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))                  -- interval expression
  STORE IN (dbs1, dbs2, dbs3)                             -- list of dbspaces
  PARTITION p0 VALUES < DATE('01/01/2010') IN rootdbs;    -- initial value

Usage Example of Interval Fragmentation with Dates
CREATE TABLE orders
  (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR)
  FRAGMENT BY RANGE (order_date)
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  STORE IN (dbs1, dbs2, dbs3)
  PARTITION p0 VALUES < DATE('01/01/2010') IN rootdbs;
What happens when you insert order_date "07/07/2010"? A new fragment is automatically allocated for the month of July and will hold values 7/1/2010 through 7/31/2010.
What happens when you insert order_date "10/10/2009"? The value is inserted into the existing partition p0. A stale range can later be purged with a detach and drop, as sketched below.
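A minimal sketch of that detach-and-drop purge (the system-generated fragment name sys_p4 is hypothetical; look up the real partition name in the sysfragments catalog first):

-- Detach the July 2010 fragment into its own table, then drop it
ALTER FRAGMENT ON TABLE orders DETACH PARTITION sys_p4 orders_jul2010;
DROP TABLE orders_jul2010;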

New Fragmentation Schemes Supported by OAT's Schema Manager
[Screenshot]

Update Statistics

Update Statistics Improvements for Fragmented Tables
• New finer granularity of statistics for fragmented tables
– Statistics are calculated at the individual fragment level
– Controlled by the new table property STATLEVEL
• UPDATE STATISTICS no longer has to scan the entire table after an ATTACH/DETACH of a fragment
– Using the new: UPDATE STATISTICS FOR TABLE ... [AUTO | FORCE]
– For extremely large tables, substantial Informix resources can be conserved by updating only the subset of fragments with stale statistics
• Can also specify the criteria by which stale statistics are defined
– Using the new STATCHANGE property

Update Statistics Improvements
• Each table/fragment tracks the number of updates, deletes, and inserts
• New table properties statchange and statlevel
– statchange
• Change percentage of a table/fragment before statistics or distributions will be updated
– statlevel
• Specifies the granularity of distributions and statistics
• TABLE, FRAGMENT, AUTO
• Fragment-level statistics and distributions
– Stored at the fragment level
– Only fragments which exceed the statchange level are re-evaluated
– Detaching or attaching a fragment can adjust the table statistics without having to re-evaluate the entire table

Improving Update Statistics
• Fragment-level statistics
– When attaching a new fragment, only the new fragment needs to be scanned, not the entire table
– Only fragments which have expired statistics are scanned
• Defining statistics expiration policies at the table level
• Detailed tracking of modifications to each table and fragment
• Automatically skipping tables or fragments whose statistics are not expired
• ANSI databases implicitly commit after each index/column statistics update

Table-Level Optimizer Statistics Policies
CREATE TABLE ... STATLEVEL [TABLE | FRAGMENT | AUTO] STATCHANGE 1..100
• STATLEVEL
– Defines the granularity or level of statistics created for a table
• STATCHANGE
– Percentage of the table modified before table/fragment statistics are considered expired
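A minimal sketch of both properties on one table. The column list and values are illustrative, and I am assuming the statistics clause follows the fragmentation clause, so check the CREATE TABLE syntax diagram for the exact placement:

CREATE TABLE orders
  (order_id INTEGER, order_date DATE)
  FRAGMENT BY RANGE (order_date)
  INTERVAL (NUMTOYMINTERVAL(1, 'MONTH')) STORE IN (dbs1, dbs2)
  PARTITION p0 VALUES < DATE('01/01/2010') IN rootdbs
  STATLEVEL FRAGMENT    -- keep distributions per fragment
  STATCHANGE 20;        -- a fragment's stats expire after 20% of it changes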

Fragment-Level Statistics – STATLEVEL Clause
• Defines the granularity of statistics created for a table
• TABLE
– The entire table is read, and table-level statistics are stored in the sysdistrib catalog
• FRAGMENT
– Each fragment has its own statistics, which are stored in the new sysfragdist catalog
• AUTO (default option for all tables)
– System automatically determines the STATLEVEL
– FRAGMENT is chosen if:
• The table is fragmented by EXPRESSION, INTERVAL, or LIST, and
• The table has more than a million rows
– Otherwise, mapped to TABLE

Fragment-Level Statistics – STATCHANGE Property
• Threshold applied at a table or fragment level to determine if existing statistics are considered expired
• Valid values for STATCHANGE are integers between 0 and 100
• Can be set for:
– The entire server, using the new ONCONFIG parameter STATCHANGE (default 10%)
– The session level, using SET ENVIRONMENT STATCHANGE value
– The table level, by specifying the STATCHANGE property in the CREATE or ALTER TABLE statement
• Order of precedence for STATCHANGE:
1. Table property
2. Session setting
3. ONCONFIG setting
4. Default value (10%)

New Syntax for Update Statistics
UPDATE STATISTICS FOR TABLE ... [AUTO | FORCE]
• AUTO
– Only tables or fragments having expired statistics are re-calculated
• FORCE
– All indexes and columns listed in the command will have their statistics re-calculated
• Default behavior is set by the AUTO_STAT_MODE parameter
– Enabled by default (i.e. AUTO)
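A minimal sketch of the two modes (table and column names are illustrative):

UPDATE STATISTICS MEDIUM FOR TABLE orders AUTO;            -- rescan only stale fragments
UPDATE STATISTICS HIGH FOR TABLE orders (order_id) FORCE;  -- recalculate regardless of staleness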

Data Warehouse

Multi-Index Scan
• Utilize multiple indexes in accessing a table
• Example: the following indexes exist on a table
– Index idx1 on tab1(c1)
– Index idx2 on tab1(c2)
SELECT * FROM tab1 WHERE c1 = 27 AND c2 BETWEEN 77 AND 88;
• The server does the following:
– Index scans on idx1 and idx2
– Combines the results
– Looks up rows satisfying both index scans
• The new "skip scan" is used
– Looks like a sequential scan, but only reads the required rows
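A hedged sketch of requesting this plan explicitly. The MULTI_INDEX optimizer directive is from my recollection of the 11.70 directive set, so treat the name and placement as an assumption to verify:

SELECT {+ MULTI_INDEX(tab1 idx1 idx2) }   -- assumed directive: combine both indexes
       *
  FROM tab1
 WHERE c1 = 27
   AND c2 BETWEEN 77 AND 88;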

OAT's View of Multi-Index Scan
[Screenshot]

Star Join for Snowflake Queries
• Star join is a new query processing method
• Improves query performance for star-schema queries
• Requires multi-index scan and skip scan
• A snowflake schema is an extension of a star schema with multiple levels of dimension tables
• Uses bitmap technology internally for efficient removal of unnecessary data rows
• Uses push-down technology

What is a Push-Down Hash Join – How it Works!
• Uses an alternate parent mechanism
• Requires PDQ (exchange iterator)
• Discards fact table rows early
• Reduces the number of fact table rows accessed based on predicates on dimension tables
• Takes advantage of multiple foreign key indexes on the fact table – relies on multi-index scan
• Uses hashing when an index is absent
[Diagram: iterator tree with a PDHJ node fed through an XCHG (exchange) node by a dimension table scan and a fact table scan]

Snowflake and Star Joins

Agenda
• New Brand Name and Editions
• Simplified Installation, Migration, and Portability
• Flexible Grid with Improved Availability
• Easy Embeddability
• Expanded Warehouse Infrastructure
• Empower Applications Development
• Enhanced Security Management
• Increased Performance
• Other Features

What are Light Scans (Recap)?
• A sequential scan of large tables which reads pages in parallel from disk and stores them in private buffers in virtual memory
• Advantages of light scans for sequential scans:
– Bypass the overhead of the buffer pool when many pages are read
– Prevent frequently accessed pages from being forced out of the buffer pool
– Transfer larger blocks of data in one I/O operation (64K/128K, platform dependent)
• Conditions to invoke light scans:
– The optimizer chooses a sequential scan of the table
– The number of table pages > the number of buffers in the buffer pool
– The isolation level obtains no lock or a shared lock on the table
• RTO_SERVER_RESTART automatically enables light scans
• Monitor using onstat -g scn

Light Scan Support for All Data Types (11.50.xC6)
• Can now enable Informix to perform light scans on:
– VARCHAR, LVARCHAR, NVARCHAR
– Compressed tables
– Any table with rows larger than a page
• Tables now only have to be greater than 1 MB in size
– Versus greater than the size of the BUFFERPOOL
• Light scans for fixed-length rows were already enabled
• Enable:
– Environment: export IFX_BATCHEDREAD_TABLE=1
– ONCONFIG file: BATCHEDREAD_TABLE 1
– Session: SET ENVIRONMENT IFX_BATCHEDREAD_TABLE '1';

Light Scan Support for All Data Types (11.70.xC1)
• Automatic light scans on tables
– Informix now automatically performs light scans when appropriate
– No longer have to set configuration parameters to enable Informix to perform these scans
• New BATCHEDREAD_INDEX configuration parameter
– Enables the optimizer to automatically fetch a set of keys from an index buffer
– Reduces the number of times a buffer is read

Light Scan Support for All Data Types (11.70.xC1)
• onstat -g lsc
– Displays information based on pages scanned of large data tables, when the BATCHEDREAD_TABLE configuration parameter or the IFX_BATCHEDREAD_TABLE environment option is not enabled
– Note: this is deprecated
• onstat -g scn
– Displays the status of all light scans, starting in 11.50.FC6

RSAM batch sequential scan info
Ses.ID  Thread  Partnum  Rowid  Rows Scan'd  Scan Type  Lock Mode   Notes
26      65      500002   111c   924          Buffpool   Slock+Test
26      66      600002   171c   1260
26      67      700002   141c   1092
26      68      800002   141c

Other Performance Enhancements
• Automated DB Scheduler tasks added to help with performance
– Time out users that have been idle for too long
– Automatically allocate CPU VPs to match hardware/licensing when IDS starts
– Alerts for tables that have outstanding in-place alters
– Ability to configure the automatic compressing, shrinking, repacking, and defragmenting of tables and extents
• Large page support on Linux
– Previously, only AIX and Solaris systems were supported
– The use of large pages can provide performance benefits in large memory configurations

Agenda
• New Brand Name and Editions
• Simplified Installation, Migration, and Portability
• Flexible Grid with Improved Availability
• Easy Embeddability
• Expanded Warehouse Infrastructure
• Empower Applications Development
• Enhanced Security Management
• Increased Performance
• Other Features

Prevent Accidental Disk Initialization of an Instance
• FULL_DISK_INIT configuration parameter
– Specifies whether or not the disk initialization command (oninit -i) can be executed in an Informix instance when a page zero exists at the root path location
– Prevents accidental initialization of an instance, or of another instance, when the first page of the first chunk (page zero) exists at the root path location
– Page zero, which is created when Informix is initialized, is the system page that contains general information about the server
• Values
– 0: The oninit -i command runs only if there is not a page zero at the root path location
– 1: The oninit -i command runs under all circumstances; also resets the FULL_DISK_INIT configuration parameter to 0 after the disk initialization

Tool for Collecting Data for Specific Problems
• New ifxcollect tool to collect diagnostic data if necessary for troubleshooting a specific problem
– Such as an assertion failure
• Can also specify options for transmitting the collected data via the File Transfer Protocol (FTP)
• Located in the $INFORMIXDIR/bin directory
• Output files are located in the $INFORMIXDIR/isa/data directory
• Examples
– To collect information for a general assertion failure:
ifxcollect -c af -s general
– To collect information for a performance problem related to CPU utilization:
ifxcollect -c performance -s cpu
– To include FTP information, specify the additional options:
-f -e [email protected]_name.org -p 9999.999 -f -m machine -l /tmp -u user_name -w password

Backup to Cloud – Overview
• Support for backing up Informix data to the Amazon Simple Storage Service (S3) cloud storage system, and restoring from it, by using the ontape backup and restore utility
• Benefits
– Simplifies the process of Informix data backup to an off-site S3 storage location, which can be accessed from anywhere on the web
– Scalable storage capacity to match the growth in Informix backup data (within the backup object size limit imposed by S3)
– Reliable storage system through the SLA provided by S3
– The pay-as-you-go model can provide a cost-effective Informix backup solution

Backup to Cloud – How it Works
Backup:
1. ontape backs up the data to a file in a local directory
2. ontape starts the Cloud Client and waits for it to finish
3. The Cloud Client transfers the backup file from the local directory to S3
4. The Cloud Client returns its execution status back to ontape, for ontape to finish running
Restore:
5. ontape starts the Cloud Client and waits for it to finish
6. The Cloud Client retrieves the backup file from S3 into the local directory
7. ontape restores the server from the local file

WebSphere MQ

WebSphere MQ – Prior to IDS Panther (11.50 and earlier)
[Diagram: an order entry application and Informix Dynamic Server exchanging messages with shipping, inventory, and credit processing applications through MQ queues (shippingreq, recvdnotify, creditque)]
• MQ functions
– Transaction management
– Functions to: Send, Receive, Publish, Subscribe, Abstract
• Use a virtual table to map a queue to a table
– Send and receive via INSERT and SELECT
– Send strings, documents (CLOB/BLOB)
• Simplified interface
– SQL-based program
– SQL-based MQ access
– 2-phase commit

MQ Enhancements in Informix 11.70
[Diagram: IDS, DB2, and Oracle servers and their applications (order entry, shipping, inventory, credit processing) connected through WebSphere MQ shippingreq queues]
• Support distributed topology for IDS and WebSphere MQ
– Server-based MQ messaging
– Client-based MQ messaging
• Support multiple queue managers within a single transaction
• New functions with MQ
• Simplified interface
– SQL-based program
– SQL-based MQ access
– 2-phase commit
