Wednesday, June 25, 2014

Free seminar in Tallinn, Estonia: "Developing modern applications using MySQL" with Ronald Bradford

Free MySQL seminar on 27 August 2014 @ 13:00. Announcement by Oracle User Group Estonia:

Developing modern applications using MySQL.

In this seminar series learn how to best use MySQL for your existing and new development requirements with leading MySQL expert and Oracle Ace Director Ronald Bradford.

These presentations provide a detailed review of the essential lifecycle components for developing a successful software application and offer a checklist for your company to review the design, development, deployment and support of your business applications with MySQL.

The presentations include:

  • Effective MySQL Architecture and Design Practices
  • Effective Software Development with MySQL
  • Effective Web Site Operations
  • Upcoming MySQL features for modern applications

Detailed description about the topics: read here.

More information about Ronald Bradford:

http://ronaldbradford.com/
https://en.wikipedia.org/wiki/Ronald_Bradford
https://apex.oracle.com/pls/apex/f?p=19297:4:::NO:4:P4_ID:1820

To attend this event, PLEASE REGISTER!

This event is organized by Oracle User Group Estonia in cooperation with Finnish, Swedish and Latvian user groups.

The event in Tallinn is sponsored by TransferWise.

If you require more information about this event, please contact ouge@ouge.eu

Tuesday, February 18, 2014

Oracle User Group Estonia meetup and other speaking arrangements

I know this is a rather late announcement, but Oracle User Group Estonia is holding its first meetup in many years tonight. I'm presenting "Making MySQL highly available with Oracle Grid Infrastructure".
More info and registration here: http://www.meetup.com/Oracle-User-Group-Estonia/events/165539962/.

So if anyone is in Tallinn today, join us!

I'm also speaking at Oracle User Group Norway 2014 Spring conference (3-5 April 2014), topic "Making MySQL highly available with Oracle Grid Infrastructure". It will be a great conference!
More info here

Tuesday, April 23, 2013

Sample code: Using Datapump API for metadata and data filtering

I was looking for PL/SQL examples of using metadata and data filtering in the Datapump API, but I didn't find any. So here is one example. It uses the table reload_dev_tables to specify which schemas/tables should be exported using Data Pump and which WHERE clause should be applied.

Structure for reload_dev_tables:

 Name                                      Null?    Type
 ----------------------------------------- -------- -------------
 OWNER                                     NOT NULL VARCHAR2(40)
 TABLE_NAME                                NOT NULL VARCHAR2(40)
 IS_FULL                                   NOT NULL NUMBER(1)
 FILTER_PREDICATE                                   VARCHAR2(250)
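
A minimal DDL sketch for creating this table. Note that the primary key and the sample rows are my own assumptions for illustration, not part of the original setup:

```sql
-- Hypothetical DDL matching the structure above; the primary key is an assumption.
CREATE TABLE reload_dev_tables (
  owner            VARCHAR2(40)  NOT NULL,
  table_name       VARCHAR2(40)  NOT NULL,
  is_full          NUMBER(1)     NOT NULL,
  filter_predicate VARCHAR2(250),           -- optional WHERE clause body
  CONSTRAINT reload_dev_tables_pk PRIMARY KEY (owner, table_name)
);

-- Hypothetical example rows: export SCOTT.EMP filtered, SCOTT.DEPT in full.
INSERT INTO reload_dev_tables VALUES ('SCOTT', 'EMP',  0, 'hiredate > SYSDATE - 365');
INSERT INTO reload_dev_tables VALUES ('SCOTT', 'DEPT', 1, NULL);
```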

Here is the Datapump code itself, tested on 11.2.0.3. This is just a demonstration of how to use the Datapump API, specifically the metadata and data filters.

  PROCEDURE export_data(p_directory IN VARCHAR2) IS
    CURSOR c_norows IS
      select owner, table_name from dba_tables WHERE owner in (
        select distinct owner from reload_dev_tables) and status='VALID' and temporary = 'N' 
        and secondary = 'N' and nested = 'NO' and dropped = 'NO' and iot_name is null
      minus
      select owner, table_name from reload_dev_tables;
    l_dp_handle       NUMBER;
    l_last_job_state  VARCHAR2(30) := 'UNDEFINED';
    l_job_state       VARCHAR2(30) := 'UNDEFINED';
    l_sts             KU$_STATUS;
    s varchar2(3000);
  BEGIN
    l_dp_handle := DBMS_DATAPUMP.open(
      operation   => 'EXPORT',
      job_mode    => 'SCHEMA');

    DBMS_DATAPUMP.add_file(
      handle    => l_dp_handle,
      filename  => 'dev_dw.dmp',
      directory => p_directory);

    DBMS_DATAPUMP.add_file(
      handle    => l_dp_handle,
      filename  => 'dev_dw.log',
      directory => p_directory,
      filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

    --
    SELECT listagg(''''||owner||'''', ', ') WITHIN GROUP (ORDER BY owner) INTO s
    FROM (SELECT DISTINCT owner FROM dba_tables WHERE owner IN (SELECT distinct owner from reload_dev_tables));
    DBMS_DATAPUMP.metadata_filter(l_dp_handle, 'SCHEMA_LIST', s);
     
    -- Add query filters
    FOR rec IN (SELECT owner, table_name, filter_predicate FROM reload_dev_tables r 
        WHERE filter_predicate IS NOT NULL AND 
        EXISTS (SELECT 1 FROM dba_tables t WHERE t.owner = r.owner AND t.table_name = r.table_name 
          AND t.dropped = 'NO') ) LOOP
      DBMS_DATAPUMP.DATA_FILTER (
        handle => l_dp_handle,
        name  => 'SUBQUERY',
        value => 'WHERE '||rec.filter_predicate,
        table_name => rec.table_name,
        schema_name => rec.owner);
    END LOOP;
    -- Add tables without rows
    FOR rec IN c_norows LOOP
      DBMS_DATAPUMP.DATA_FILTER (
        handle => l_dp_handle,
        name  => 'INCLUDE_ROWS',
        value => 0,
        table_name => rec.table_name,
        schema_name => rec.owner);
    END LOOP;
   
    DBMS_DATAPUMP.start_job(l_dp_handle);

    DBMS_DATAPUMP.detach(l_dp_handle);
  END;
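
To run the export, the procedure only needs a Data Pump directory object that the database user can write to. A minimal invocation sketch (the directory name, path, and grantee here are my assumptions):

```sql
-- Hypothetical setup and invocation; adjust the path and privileges to your environment.
CREATE DIRECTORY dp_dir AS '/u01/app/oracle/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO dbauser;

BEGIN
  export_data(p_directory => 'DP_DIR');
END;
/
```

Since the procedure calls start_job and then detach, the export continues in the background; you can monitor its progress through the DBA_DATAPUMP_JOBS view or the log file written to the directory.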

Thursday, April 4, 2013

Exploring DBVisit Replicate

There are a few Oracle database replication solutions on the market:

  • Oracle Streams (powerful, included with RDBMS license (Oracle SE has trigger-based capture, EE mines redo logs and log buffer), but deprecated - no longer developed, complex to manage)
  • Oracle GoldenGate (powerful, but very expensive)
  • Tungsten (heterogeneous, but from Oracle side requires deprecated CDC and complex to set up - one interesting feature, you can write data modification plugins before data is applied on target)
  • DBVisit (pretty inexpensive compared to GoldenGate, but powerful)

In this blog post I'll give a short overview of DBVisit Replicate, which can be used to replicate data in real time between two Oracle databases or from Oracle to MySQL/MSSQL. I am not connected to the DBVisit company in any way; I was testing their replication solution for a client of mine.

A few interesting key concepts behind DBVisit Replicate:

  • It uses optimistic apply on the target side, meaning that data changes are replicated and applied (but not committed) on the target even before the transaction is committed on the source. In case of a rollback, the target database needs to roll back all the changes too. The upside is that committed transactions reach the target faster, even if the transaction is large.
  • DBVisit uses its own change capture process to mine the online redo logs, so it does not depend on triggers to log the changes and does not impact end-user sessions. The potential downside: Oracle can change the internal structure of the redo logs at any time, so before upgrading the database, check DBVisit compatibility first.
  • DBVisit can run its CPU-intensive processing on a different server, so it does not waste expensive CPU cycles on the Oracle DB server. This is called the 3-tier architecture in DBVisit. In this architecture the source database server only needs to run a small FETCHER process, which sends redo log changes over the network to a dedicated MINE process/server that does the actual log processing. MINE filters out the required database changes and sends the processed information over the network to the APPLY process. APPLY then connects to the target database over OCI (so it does not need to run on the target database server) and executes the DML statements. (Note: the FETCHER process is optional; by default DBVisit runs the MINE process on the source database server.)

DBVisit is very easy to install and it supports RAC and ASM. My setup is done on an 11.2.0.3 3-node RAC+ASM cluster running on Oracle Linux 5.8. For Grid Infrastructure (ASM), role separation is in use (GI runs under a different OS account than RDBMS). I'm using DBVisit Replicate 2.4.21 (currently unreleased, but it contains many bug fixes needed for my environment).

In the following simple test setup:

  • I'm using the default 2-tier architecture, so no fetcher process. Apply also runs in the same host as the target database.
  • I'm using the TAR version of the DBVisit software (not RPM), so I can have a single shared copy of the software for all servers in the configuration. If you use RPM, the same RPM package needs to be installed on all servers (and you need root privileges). In my case I'm using an OCFS2 filesystem, and the DBVisit software is extracted to /u02/app/oracle/dbvisit.
  • As the processing area for each DBVisit process I'm using /u03/dbvisit/pte in this example. In my current case it is also on an OCFS2 filesystem and shared between all servers, but it does not have to be; when I move this setup to production, I'll also use the 3-tier architecture and local disks.
  • Grid Infrastructure and ASM run under OS account grid.
  • I'm using IP 10.0.0.1 as the server address where MINE is running.
  • I'm using IP 10.0.0.2 as the server address where APPLY is running.

First run dbvrep, the only executable file in the DBVisit Replicate installation package, and complete the initialization wizard.

[oracle@jfadboc1n02.jfa.unibet.com pte]$ /u02/app/oracle/dbvisit/replicate/dbvrep
Initializing......done
Dbvisit Replicate version 2.4.21.2746
Copyright (C) Dbvisit Software Limited.  All rights reserved.
No DDC file loaded.
Run "setup wizard" to start the configuration wizard or try "help" to see all commands available.

dbvrep> setup wizard
This wizard configures Dbvisit Replicate to start a replication process.

The setup wizard creates configuration scripts, which need to be run after the wizard ends. No changes to the databases are made before that.

The progress is saved every time a list of databases, replications, etc. is shown. It will be re-read if wizard is restarted and the same DDC name and script path is
selected.

           Run the wizard now? [yes] yes

           Accept end-user license agreement? (view/yes/no) [view] yes

Before starting the actual configuration, some basic information is needed. The DDC name and script path determines where all files created by the wizard go (and where
to reread them if wizard is rerun) and the license key determines which options are available for this configuration.

           (DDC_NAME) - Please enter a name for this replication (suggestion: use the name of the source database): [] pte

           (LICENSE_KEY) - Please enter your license key (or just enter "(trial)"): [(trial)] trial

           (SETUP_SCRIPT_PATH) - Please enter a directory for location of configuration scripts on this machine: [/home/oracle/pte] /u03/dbvisit/pte
... and so on. At the end, the wizard will execute a script on both the source and target databases that creates the DBVREP schemas and grants them all necessary privileges. If you enabled DDL replication, it will also enable database-wide supplemental logging on the source database, so first check the DBA_2PC_PENDING view to make sure you don't have any pending 2PC transactions open; otherwise adding supplemental logging will hang until the 2PC transactions are resolved.

The MINE (or FETCHER, in the case of the 3-tier architecture) process needs to run directly on the source database server (in case of RAC, pick any one of the database nodes) and under the same OS account as ASM, so in my case grid. The setup wizard creates a script *-run-10.0.0.1.sh to start MINE.

[grid@xxxxxx pte]$ ./pte-run-10.0.0.1.sh
Initializing......done
DDC loaded from database (234 variables).
Dbvisit Replicate version 2.4.21.2746
Copyright (C) Dbvisit Software Limited.  All rights reserved.
DDC file /u03/dbvisit/pte/pte-MINE.ddc loaded.
Starting process MINE...started

The APPLY process shouldn't need Oracle client software installed, because DBVisit Replicate comes with an embedded Oracle Instant Client. In the version I'm currently using this did not work for me, so I needed to add the following line to the *-APPLY.ddc file to set the correct ORACLE_HOME. This bug should be fixed in the next released version.

memory_set ORACLE_HOME /u01/app/oracle/product/11.2.0.3/db

Also open *-run-10.0.0.2.sh (the script that starts the APPLY process) and set NLS_LANG on the first line. NLS_LANG needs to be AMERICAN_AMERICA.<source database character set>:

export NLS_LANG=AMERICAN_AMERICA.AL32UTF8

Now start the APPLY process:

[oracle@xxxxxxxxxx pte]$ ./pte-run-10.0.0.2.sh
Initializing......done
DDC loaded from database (234 variables).
Dbvisit Replicate version 2.4.21.2746
Copyright (C) Dbvisit Software Limited.  All rights reserved.
DDC file /u03/dbvisit/pte/pte-APPLY.ddc loaded.
Starting process APPLY...started

Monitoring and configuring the replication process is done through the replication console, which can be started using the start-console.sh script. It displays the status of all DBVisit processes and a limited list of tables that have had changes replicated recently. From this command line you can control the replication process.

/MINE IS running. Currently at plog 13 (redo sequence 1201 [1] 1395 [3] 1086 [2]) and SCN 96447934933 (04/04/2013 16:49:25).
APPLY IS running. Currently at plog 13 and SCN 96447934644 (04/04/2013 16:49:25).
Progress of replication pte:MINE->APPLY: total/this execution
--------------------------------------------------------------------------------------------------------------------------------------------
DBAUSER.DBVISIT_PING:         100%  Mine:21/21           Unrecov:0/0         Applied:21/21       Conflicts:0/0       Last:04/04/2013 18:24:36/OK
--------------------------------------------------------------------------------------------------------------------------------------------
1 tables listed.

dbvrep>

Some useful commands: LIST PREPARE, PREPARE SCHEMA, PREPARE TABLE, UNPREPARE SCHEMA, UNPREPARE TABLE, SHUTDOWN MINE, SHUTDOWN APPLY, SHUTDOWN ALL, LIST CONFLICT. Before you add (prepare) new tables/schemas with existing data to the replication configuration, take a look at the user's guide for the proper procedure. If you just execute PREPARE TABLE/SCHEMA and then export the existing data, you will get ORA-01466.
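
As a rough sketch of how these console commands fit together (the exact argument syntax is my assumption and may differ between versions, so treat this as illustrative only):

```
dbvrep> LIST PREPARE                            -- show what is currently replicated
dbvrep> PREPARE TABLE dbauser.dbvisit_ping      -- add one table to the replication
dbvrep> UNPREPARE TABLE dbauser.dbvisit_ping    -- remove it again
dbvrep> SHUTDOWN ALL                            -- stop MINE and APPLY
```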

For my current project it was very important to find a replication solution that could exclude some transactions from replication, for example when you need to purge data from the source database but want to keep the same data on the target DB. This is possible with DBVisit Replicate:

  • Partition-level DDL is not replicated by default, so on the source database you can simply drop/truncate a partition and it will not be replicated to the target side.
  • If you need to exclude specific transactions from replication, execute SET TRANSACTION NAME as the first command in that transaction, using the name format 'DBREPL_DB_%s_XID_%s'.
    The first %s is the name of the target database (as configured in the setup wizard); the second %s is not relevant.

    COMMIT; -- just to be sure that the next command is the first in that transaction
    SET TRANSACTION NAME 'DBREPL_DB_archpte_XID_XXXXX';
    DELETE FROM dbauser.dbvisit_ping;
    COMMIT;

I think this is enough for a first post. You can do a lot of complex configurations with DBVisit; it is a flexible product. Test your setup properly, as there can be issues, depending on your database setup, that DBVisit has not yet tested for. If you find an issue, report it to DBVisit support (this can also be done with a trial license); DBVisit has an excellent and fast support team. So far I have opened 7 tickets with DBVisit support and all of them have been resolved within hours or a day.

DBVisit also has some helpful videos on YouTube.

Monday, June 25, 2012

Binding IN-lists as comma-separated values

One link that I have to send to developers quite frequently is about how to use XMLTABLE in SQL queries to bind a comma-separated list of values instead of generating a large IN-list directly into the query (and this way avoid a new SQL_ID/cursor/wasted memory for each different value combination provided). The link that I usually send is this, but in this post I'd like to expand on it a little, so it works even when the string contains special XML characters.

For numbers, the usage is simple:

> var num_list varchar2(100)
> exec :num_list := '2668,2669,2670'

PL/SQL procedure successfully completed.

> SELECT id FROM ath_case WHERE id IN (
 SELECT (column_value).getNumberVal() FROM xmltable(:num_list)
 );

        ID
----------
      2668
      2669
      2670

> exec :num_list := '2671,2672,2673,2674'

PL/SQL procedure successfully completed.

> SELECT id FROM ath_case WHERE id IN (
 SELECT (column_value).getNumberVal() FROM xmltable(:num_list)
 );

        ID
----------
      2671
      2672
      2673
      2674

If the bound list consists of strings, some extra steps are needed: the comma-separated values have to be enclosed in double quotes, and the values have to be XML-encoded (XML special characters, like ", replaced with entity codes).

> var str_list varchar2(100)
> exec :str_list := '"GI1","BI1"'

PL/SQL procedure successfully completed.

> SELECT u.first_name FROM ath_user u 
 JOIN ath_team t ON u.id = t.manager_id 
 WHERE t.name IN (
 SELECT DBMS_XMLGEN.CONVERT((column_value).getStringVal(), 1) FROM xmltable(:str_list)
 );

FIRST_NAME
-----------
Riho
Kaur

> exec :str_list := '"OS1","OS2"'

PL/SQL procedure successfully completed.

> SELECT u.first_name FROM ath_user u 
 JOIN ath_team t ON u.id = t.manager_id 
 WHERE t.name IN (
 SELECT DBMS_XMLGEN.CONVERT((column_value).getStringVal(), 1) FROM xmltable(:str_list)
 );

FIRST_NAME
-----------
Markko
Aive

> set define off
> exec :str_list := '"value1","value2","value " with quot","value & with amp"';

PL/SQL procedure successfully completed.

> SELECT DBMS_XMLGEN.CONVERT((column_value).getStringVal(), 1) FROM xmltable(:str_list);

DBMS_XMLGEN.CONVERT((COLUMN_VALUE).GETSTRINGVAL(),1)
-------------------------------------------------------------------------
value1
value2
value " with quot
value & with amp
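
On the application side, the double-quoting and XML-encoding can be done in one pass before binding. A hedged PL/SQL sketch (this helper block is mine, not part of the original tip); DBMS_XMLGEN.CONVERT with its default flag entity-encodes the value, i.e. the inverse of the decode (flag 1) used in the queries above:

```sql
DECLARE
  TYPE t_vals IS TABLE OF VARCHAR2(100);
  l_vals t_vals := t_vals('value1', 'value " with quot', 'value & with amp');
  l_list VARCHAR2(4000);
BEGIN
  FOR i IN 1 .. l_vals.COUNT LOOP
    l_list := l_list
              || CASE WHEN i > 1 THEN ',' END
              -- Default flag (ENTITY_ENCODE) replaces characters like " and &
              -- with entity codes, then each value is wrapped in double quotes.
              || '"' || DBMS_XMLGEN.CONVERT(l_vals(i)) || '"';
  END LOOP;
  DBMS_OUTPUT.put_line(l_list);  -- bind this string as :str_list
END;
/
```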