Showing posts with label Oracle 11g.

01 September 2016

RegExp: Constraint to prevent spaces at the beginning or end.

Even though a space is a regular character, the client didn't want spaces at the beginning or end of a string. Any spaces in the middle were fine.
Of course this could be handled by the application, but it must also be implemented in the database. Using a check constraint with a regular expression will prevent the end user from entering unwanted data.

To try things out, let's just start with a simple table with a single column.

   create table test
   (str varchar2(10));

Now the magic part: add a check constraint using a regular expression

   alter table test
   add constraint str_chk check (
      regexp_like (str, '^[^[:space:]].+[^[:space:]]$')
    );

The regular expression reads: the string must start (the caret) with a character that is not in the [:space:] character class, followed by one or more characters (the period and plus sign), and it must end (the dollar sign) with a character that is not in the [:space:] character class.

UPDATE - 07-Sept-2016

The expression used in the above constraint also rejects perfectly valid strings of one or two characters. This omission was corrected by David Grimberg. The correct regular expression is:

^[^[:space:]](.*[^[:space:]])?$

To test the constraint, the following insert statements were used.

   insert into test values ('hello');
   insert into test values ('hel lo');
   -- Not allowed:
   --   starting with a space
   insert into test values (' hello');
   --   ending with a space
   insert into test values ('hello ');
   --   just a space
   insert into test values (' ');
   --   multiple spaces
   insert into test values ('   ');
   --   Tab
   insert into test values (chr(9));
   --   Line feed
   insert into test values (chr(10));
   --   Carriage Return
   insert into test values (chr(13));
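The same rule can be checked outside the database as well. Here is a quick Python sketch of the corrected pattern; note it is an approximation, with Python's `\S` standing in for the POSIX `[^[:space:]]` class:

```python
import re

# Corrected pattern: a non-space character, optionally followed by
# anything ending in a non-space character. re.DOTALL lets '.' match
# newlines, so only leading/trailing whitespace is rejected.
pattern = re.compile(r'\S(.*\S)?', re.DOTALL)

def allowed(s):
    # fullmatch anchors at both ends, like ^...$ in the constraint
    return pattern.fullmatch(s) is not None

# allowed('hello'), allowed('hel lo'), allowed('h')  -> True
# allowed(' hello'), allowed('hello '), allowed(' ') -> False
```

The single-character case ('h') is exactly the one the updated expression fixes.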

18 May 2016

Rounding amounts, divide cents over multiple lines

In previous articles I wrote about dealing with a missing cent when you need to divide a certain amount over multiple lines. In those articles (links are at the bottom) I described a method to put the rounding difference on the last row.
Then a question arose (as a comment):
What if for example i have 42 records and i wish to divide 100 by 42. I would get a rounded value of 2.38. If i multiply this by 42 it would amount to just 99.96. What if want to spread the .04 difference on 4 records, not just the last record. In effect i'll be having 4 records with 2.39. Right now i'm doing this via cursor. Im kinda hoping i can do this using sql or analytic functions
Let's create a table first, called T:
create table t
as
select rownum id
  from dual
 connect by level <= 42
To determine how many rows need to carry an extra cent, we need three pieces of information:
  1. The amount that we need to divide
  2. The total number of rows in the set
  3. The difference between the rounded amount and the amount that we need to divide
 select id
      ,100 amount
      ,count(*) over () entries
      ,round (100 / count(*) over (), 2) rounded
    from t
To calculate how many rows will have the extra cent added or subtracted, the following formula is used.
abs (amount - (entries * rounded)) * 100
When you want the rounding done on the "first" rows, a simple CASE expression can be used to mark the rows
case 
when rownum <= abs ((amount - (entries * rounded)) * 100)
then 'x'
end as indicator
The query now looks like the following, with the first ten rows:
with amounts
as
( select id
      ,100 amount
      ,count(*) over () entries
      ,round (100 / count(*) over (), 2) rounded
    from t
)
select id
      ,amount
      ,entries
      ,rounded
      ,case 
       when rownum <= abs ((amount - (entries * rounded)) * 100)
       then 'x'
       end as indicator
 from amounts
;
        ID     AMOUNT    ENTRIES    ROUNDED I
---------- ---------- ---------- ---------- -
         1        100         42       2.38 x
         2        100         42       2.38 x
         3        100         42       2.38 x
         4        100         42       2.38 x
         5        100         42       2.38
         6        100         42       2.38
         7        100         42       2.38
         8        100         42       2.38
         9        100         42       2.38
        10        100         42       2.38
The last piece of the puzzle is to determine if we need to add or subtract the cent. Using the SIGN function is an easy way to determine this.
sign (amount - (entries * rounded)) as pos_neg
Putting everything together will give you the following query (with the first 10 rows)
with amounts
as
( select id
      ,100 amount
      ,count(*) over () entries
      ,round (100 / count(*) over (), 2) rounded
    from t
)
,indicators as 
(
select id
      ,amount
      ,entries
      ,rounded
      ,case 
       when rownum <= abs ((amount - (entries * rounded)) * 100)
       then 'x'
       end as indicator
      ,sign (amount - (entries * rounded)) as pos_neg
 from amounts
)
select id
      ,rounded +
       (nvl2 (indicator, 0.01, 0) * pos_neg) final_amount
  from indicators
;
        ID FINAL_AMOUNT
---------- ------------
         1         2.39
         2         2.39
         3         2.39
         4         2.39
         5         2.38
         6         2.38
         7         2.38
         8         2.38
         9         2.38
        10         2.38

And there it is. The rounding is divided over the first four rows.


Don't want to use the first rows, but the last rows instead? Use the following expression to set the indicator
case 
when rownum >
 case sign ((amount - (entries * rounded)))
   when -1 then entries - abs ((amount - (entries * rounded)) * 100)
   else entries - (amount - (entries * rounded)) * 100
   end
then 'x'
end as indicator
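Outside the database, the same spread-the-difference logic can be sketched in a few lines of Python (illustrative only; `spread` is a made-up helper, not part of the article's SQL):

```python
from decimal import Decimal, ROUND_HALF_UP

def spread(amount, entries):
    """Divide amount over `entries` rows, rounded to cents, spreading
    the leftover cents over the first rows."""
    amount = Decimal(str(amount))
    rounded = (amount / entries).quantize(Decimal('0.01'),
                                          rounding=ROUND_HALF_UP)
    # leftover in whole cents; the sign tells us to add or subtract
    diff_cents = int((amount - rounded * entries) * 100)
    cent = Decimal('0.01') if diff_cents > 0 else Decimal('-0.01')
    return [rounded + cent if i < abs(diff_cents) else rounded
            for i in range(entries)]

rows = spread(100, 42)
# first four rows carry the extra cent (2.39), the rest get 2.38
```

For 100 over 42 rows this yields four rows of 2.39 and thirty-eight rows of 2.38, matching the query output.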

Links

07 August 2015

Rounding Amounts, the missing cent: with the MODEL clause

Rounding amounts may lead to rounding issues; I have written about how this can be resolved using some analytic functions in a previous blogpost.
Playing around a little bit, I also came up with a method to resolve the rounding difference with the MODEL clause.

To create an example, first let's create a table with only three records in it.

   SQL> create table t
     2  as
     3  select rownum + 42 id
     4    from dual
     5  connect by level <= 3
     6  ;

   Table created.
   SQL> select *
      2    from t
      3  /

                ID
        ----------
                43
                44
                45

In the code below the DIMENSION is the ID from the table. The measures (the "cells" that we can manipulate in the MODEL clause) are used as follows:

Amount: the amount that is the result of the division per record
Total: the amount that we want to divide over the records
Diff: placeholder for the difference between the sum of the rounded amounts and the amount that needs to be divided
Indicator: we need to know the highest value of the DIMENSION (to add the difference to). Since we can't determine the highest value of the DIMENSION directly, this MEASURE is used for that.
When we execute the query (without any rules yet) we get the following results; except for the DIMENSION column (ID) and the MEASURE Indicator, all columns are empty.
      SQL> select *
        2    from t
        3  model
        4  dimension by (id)
        5    measures (cast (null as number) amount
        6    ,cast (null as number) total
        7    ,cast (null as number) diff
        8    ,id indicator
        9    )
       10  rules (
       11  )
       12  /

       ID           AMOUNT      TOTAL       DIFF  INDICATOR
      ---------- ---------- ---------- ---------- ----------
              43                                          43
              44                                          44
              45                                          45
It is time to add some rules to the MODEL clause.
The first rule is to add the amount which needs to be divided.
total [0] = 100   
This will add a "magic" row to the resultset with Dimension 0 where the measure column total is filled with 100, the amount that we want to divide.
The reason I call it a "magic" row is that it is not in the table; it is made up. In this row I will store some values that I need for my calculations.
The second rule is
 indicator [0] = max (indicator) [any]
 
In this rule I will determine which row is the "last" row, the one with the highest ID.
Next rule is to do the actual calculation: divide the amount by the number of rows in the resultset. Of course don't count the "magic" row, hence the condition id > 0.
 amount[id > 0] = round (total[0]/(count(*)[id > 0]), 2)
 
To determine the total of the rounded values, we will use the following rule:
 amount[0] = sum (amount)[id > 0]
 
This total amount is also placed on the "magic" row.
Calculating the difference between the amount that we want to divide and the actual divided amount is done in the following rule:
 diff[0] = total[cv()] - amount[cv()]
 
The difference is added to the "last" row in the last rule:
 amount[indicator[0]]  = amount[cv()] + diff[0]
 
To see the complete query in action:
   SQL> select *
     2    from t
     3  model
     4  dimension by (id)
     5    measures (cast (null as number) amount
     6    ,cast (null as number) total
     7    ,cast (null as number) diff
     8    ,id indicator
     9    )
    10  rules (
    11    total [0] = 100
    12   ,indicator [0] = max (indicator) [any]
    13   ,amount[id> 0] = round (total[0]/(count(*)[id>0]), 2)
    14   ,amount[0] = sum (amount)[id>0]
    15   ,diff[0] = total[cv()] - amount[cv()]
    16   ,amount[indicator[0]]  = amount[cv()] + diff[0]
    17  )
    18  /

       ID     AMOUNT      TOTAL       DIFF  INDICATOR
   ---------- ---------- ---------- ---------- ----------
           43      33.33                               43
           44      33.33                               44
           45      33.34                               45
            0      99.99        100        .01         45
   
As you can see in the output above the values are rounded (in the AMOUNT column) and the last row takes the difference.
But our "magic" row also shows up in the output; to filter it out of the resultset, simply add a where clause.
      SQL> select id
        2      ,amount
        3    from (select *
        4    from t
        5  model
        6  dimension by (id)
        7    measures (cast (null as number) amount
        8    ,cast (null as number) total
        9    ,cast (null as number) diff
       10    ,id indicator
       11    )
       12  rules (
       13    total [0] = 100
       14   ,indicator [0] = max (indicator) [any]
       15   ,amount[id> 0] = round (total[0]/(count(*)[id>0]), 2)
       16   ,amount[0] = sum (amount)[id>0]
       17   ,diff[0] = total[cv()] - amount[cv()]
       18   ,amount[indicator[0]]  = amount[cv()] + diff[0]
       19  ))
       20  where id> 0 order by id
       21  /

              ID     AMOUNT
      ---------- ----------
              43      33.33
              44      33.33
              45      33.34
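Stripped of the MODEL syntax, the calculation that the rules perform can be sketched in a few lines of Python (an illustrative analog, not the article's SQL):

```python
def divide_last_row(total, ids):
    """Round total/len(ids) to cents per row and put the rounding
    difference on the row with the highest id (the 'indicator')."""
    amount = round(total / len(ids), 2)
    rows = {i: amount for i in ids}
    # difference between what we wanted to divide and what we divided
    diff = round(total - amount * len(ids), 2)
    rows[max(ids)] = round(rows[max(ids)] + diff, 2)
    return rows

result = divide_last_row(100, [43, 44, 45])
# rows 43 and 44 get 33.33, row 45 takes the difference: 33.34
```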
      

Links

02 July 2015

Conditional Compilation and Static Boolean

One of my pet projects is LoggerUtil, a utility for Logger, which is an excellent logging tool for PL/SQL.
This post is not about Logger, but some dealings with Conditional Compilation.

With Conditional Compilation you can create a single code base to handle different functionalities depending on compiler flags.
The latest addition to LoggerUtil was a method to create a custom template. For this to work, LoggerUtil depends on a certain Logger Release (where issue #103 is implemented). The dependency lies in the fact that the custom template is stored in the LOGGER_PREFS table and before issue #103 was resolved there was no way to add data to the LOGGER_PREFS table (or at least not a supported way).

Conditional Compilation is just what the doctor ordered. With a Conditional Compilation directive you can check if Logger is at least version 3, so there is a supported way of writing into the LOGGER_PREFS table. Sounds easy enough.

And this is where I made some discoveries about Conditional Compilation.

Let's begin with a package specification with only CONSTANTS in there.

      create or replace package constants_pkg
      is
         version   constant varchar2(10) := '1.2.3';
         major_num constant number := 1;
         major_int constant pls_integer := 1;
         major_vc  constant varchar2(1) := 'a';
      end constants_pkg;
   
There are a few variations in there, starting with the way Logger currently implements the version number (the constant called VERSION).
Second, there is a NUMBER constant.
Third, a PLS_INTEGER constant.
Fourth, a variation on the first constant: just a single character.

Following is a procedure, called conditional (how appropriate):

      create or replace
      procedure conditional
      is
      begin
         $if constants_pkg.version like '1%'
         $then
            dbms_output.put_line ('string, with LIKE comparison');
         $end
         dbms_output.put_line ('This will always be displayed');
      end conditional;
   
The $IF, $THEN, $END are part of the syntax used for Conditional Compilation.
In the $IF directive the packaged constant is checked to see if the string starts with a 1. When it does, the first DBMS_OUTPUT line is included in the compiled code; if the packaged constant doesn't start with a 1, that line is left out.
You might say: "Shouldn't you do a comparison like this?"
      $if to_number (substr (constants_pkg.version, 1, 1)) > 1
   
and you would be right, but... for this example it doesn't matter, as neither works. When you try to compile the code, you will see the following error:
Errors for PROCEDURE CONDITIONAL:

LINE/COL ERROR
-------- -----------------------------------------------------------------
4/8  PLS-00174: a static boolean expression must be used
   

So my next attempt at getting this to work, was using the full version constant:

      $if constants_pkg.version = '1.2.3'
   
With the same results, the same compilation error.

What about just a single character string?

      $if constants_pkg.major_vc = '1'
   
...Nope, again the same compilation error.

Next up, try a NUMBER constant instead:

      $if constants_pkg.major_num = 1.0
   
I thought the ".0" at the end could make a difference, but alas... the same compilation error.

Last attempt: the PLS_INTEGER:

      $if constants_pkg.major_int = 1
   
This may not come as a surprise now, but this works. :D
This is similar to the way that Oracle does it itself.

When you want to know which release of the Oracle database you are on, you can check DBMS_DB_VERSION. There are constants defined in DBMS_DB_VERSION which you can use with Conditional Compilation.

So Martin, if you are still reading: Can I have the version as a PLS_INTEGER, please?

Links to related articles

  1. Speed Up Development with Logger
  2. Create Custom Template with LoggerUtil
  3. DBMS_DB_VERSION

11 June 2015

Deadlock with a Virtual Column

Update: There is already a bug filed for this issue, it is registered under number: 22591494

Virtual Columns are really cool. I like them a lot. If you've never heard of them, shame on you, learn about them.
In short: a Virtual Column is not a real column, it's an expression that looks like a column... more or less.
While using the Virtual Columns, we ran into a little oddity with them.

First of all, let's start with the version of the database that I tested this on. Yes, I know it's an 11g database; that's because the client is still running this release.
These tests were run on the VirtualBox image that is provided by Oracle.
I still need to run these tests on Oracle 12c.
Update: I just ran the script on my Oracle 12c database (in a PDB) and the same deadlock occurs.

   SQL> select *
     2    from v$version
     3  /

   BANNER
   ----------------------------------------------------------------------
   Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
   PL/SQL Release 11.2.0.2.0 - Production
   CORE 11.2.0.2.0 Production
   TNS for Linux: Version 11.2.0.2.0 - Production
   NLSRTL Version 11.2.0.2.0 - Production

The setup for this test is based on a copy of the EMP table.

SQL> create table emp
  2  as
  3  select *
  4    from scott.emp
  5  /

Table created.

To create a Virtual Column on my copy of the EMP table, I need a deterministic function.
This function takes two arguments, one for the ENAME and one for the EMPNO. And what does the function do? Actually nothing, it returns NULL.

   SQL> create or replace
     2  function vc
     3    (p_ename in emp.ename%type
     4    ,p_empno in emp.empno%type
     5    )
     6   return varchar2 deterministic
     7  is
     8  begin
     9   return null;
    10  end vc;
    11  /

   Function created.

The function needs to be deterministic because that is required when you want to define a Virtual Column.
Now we can add the Virtual Column (called VC) to my copy of the EMP table.

SQL> alter table emp
  2  add descr as (vc (ename, empno))
  3  /

Table altered.

So far, no problems. It all works.
The trouble begins when you execute a TRUNCATE TABLE statement.

   SQL> truncate table emp
     2  /
   truncate table emp
          *
   ERROR at line 1:
   ORA-04020: deadlock detected while trying to lock object ALEX.EMP
   

To be honest, this is not the first deadlock that I created and it probably won't be the last :)
The snag with deadlocks is trying to figure out what caused it in the first place.
Of all things that I thought would happen, a deadlock is not one of them.
How can it? It is my own personal VirtualBox and I am the only one using it.

The first step in investigating a deadlock is usually the alert.log; however, there was nothing in it regarding this deadlock... honest.

After a bit of googling, I found a note on deadlocks by Yong Huang (link at the bottom) describing the causes of deadlocks.
In that article he points out that you can get more insight if you set a certain event, and that's what I did.

   SQL> alter session set events '4020 trace name processstate forever, level 10'
  2  /
  
Session altered.
   
To find out where the trace file was located, and the name of it, I used a query by Tanel Poder (link at the bottom).
      SQL> select value ||'/'||(select instance_name from v$instance) ||'_ora_'||
     2 (select spid||case when traceid is not null then '_'||traceid else null end
     3       from v$process where addr = (select paddr from v$session
     4       where sid = (select sid from v$mystat
     5           where rownum = 1
     6      )
     7         )
     8 ) || '.trc' tracefile
     9* from v$parameter where name = 'user_dump_dest'
   SQL> /

   TRACEFILE
   -------------------------------------------------------------------------------------------------
   /home/oracle/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_3791.trc
   

In that trace file was the following information:

   A deadlock among DDL and parse locks is detected.
   This deadlock is usually due to user errors in
   the design of an application or from issuing a set
   of concurrent statements which can cause a deadlock.
   This should not be reported to Oracle Support.
   The following information may aid in finding
   the errors which cause the deadlock:
   ORA-04020: deadlock detected while trying to lock object ALEX.EMP
   --------------------------------------------------------
   object      waiting     waiting          blocking    blocking
   handle      session     lock       mode  session     lock       mode
   ----------  ----------  ---------- ----  ----------  ---------- ----
   0x31ae0d7c  0x3aab3cf4  0x31afc4c0  X    0x3aab3cf4  0x31aeb718  S
   
As you can see in the text taken from the trace file, both sessions involved in the deadlock are the same: the waiting and the blocking session are both 0x3aab3cf4.

So at least my assumptions were correct, I was blocking myself.
Not that it got me any further...

After quite a long time fiddling around, I discovered the following.
If I change the function like below, the deadlock doesn't occur. See if you can spot the difference.

   SQL> create or replace
  2  function vc
  3    (p_ename in varchar2
  4    ,p_empno in number
  5    )
  6   return varchar2 deterministic
  7  is
  8  begin
  9   return null;
10  end vc;
11  /

Function created.

SQL> truncate table emp
  2  /

Table truncated.

Did you spot the difference?
The function at first used anchored datatypes for the arguments (%TYPE) and later on just a simple type (NUMBER and VARCHAR2).
Using the simple types, the truncate works.
I haven't figured out exactly when the deadlock occurs. It seems to happen when an argument is anchored (%TYPE) and the underlying datatype is a NUMBER, but I haven't really pinned down the cause.

Links

  1. Two common Deadlocks by Yong Huang
  2. Tanel Poder: Querying the current tracefile name, using SQL – with tracefile_identifier
  3. Oracle Base on Virtual Columns

07 May 2015

Splitting a comma delimited string the RegExp way, Part Three

The article read most often on this blog is called "Splitting a Comma Delimited String, the RegExp way".
On this blog there are two articles about this technique, one is about splitting up a single string into multiple rows and the other is about multiple strings into multiple rows.
Links to both articles are included at the bottom of this article.
It seems there is a frequent need for functionality like this. And just to add to those two articles on the subject, here is a third one, combining the techniques of the first two.

Recently I was asked for help in a comment on how to go about and split up a string like the following

      'ABC/FDF,RET/YRT,UYT/ERT'
   
The expected outcome would be
      ABC
      FDF
      RET
      YRT
      UYT
      ERT
   

As you can see the input string consists of two different delimiters, namely a comma and a forward slash (/).
To split this string up, you will need both techniques from the other articles.

Let's start with a variable containing the input string.

      SQL> var input varchar2(150)
      SQL> 
      SQL> exec :input := 'ABC/FDF,RET/YRT,UYT/ERT'

      PL/SQL procedure successfully completed.
   

The first step is to split the string up using the first method, split up the string using the comma as a delimiter.

SQL> select regexp_substr (:input, '[^,]+',1, rownum) str
  2    from dual
  3   connect by level <= regexp_count (:input, '[^,]+')
  4  ;

STR
-------
ABC/FDF
RET/YRT
UYT/ERT
   
This will leave us with three records each consisting of a string that needs further splitting up, but this time with the forward slash as the delimiter.

Using these rows as the input in the next phase, use the technique described in the second article.
By introducing Subquery Factoring (lines 1-5), create a named query "commas"

   SQL> with commas
     2  as
     3  (select regexp_substr (:input, '[^,]+',1, rownum) str
     4    from dual
     5   connect by level <= regexp_count (:input, '[^,]+'))
     6   select regexp_substr (str, '[^\/]+', 1, rn) split
     7   from commas
     8   cross
     9   join (select rownum rn
    10       from (select max (regexp_count(rtrim (str, '/')||'/', '\/')) mx
    11        from commas
    12     )
    13    connect by level <= mx
    14    )
    15   where regexp_substr (str, '[^\/]+', 1, rn) is not null
    16   ;

   SPLIT
   ----------------------------------
   ABC
   FDF
   RET
   YRT
   UYT
   ERT
Because the forward slash has special meaning in this regular expression, it needs to be escaped with a backslash.
You can see this on lines 6, 10, and 15.
What is interesting, or at least what I find interesting, is the use of the RTRIM on line 10.
The values in each row are not completely delimited by forward slashes: the trailing one is missing. Just concatenating a slash to each row would be too easy; what if there already is a trailing slash?
The RTRIM removes a trailing slash (if any) before one is concatenated at the end, making sure that the string is split up at the right place.
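The two-phase split is easy to mirror outside SQL. A short Python sketch of the same idea (illustrative, using the same input string):

```python
import re

input_str = 'ABC/FDF,RET/YRT,UYT/ERT'

# phase 1: split on the outer delimiter, the comma
groups = re.findall(r'[^,]+', input_str)   # ['ABC/FDF', 'RET/YRT', 'UYT/ERT']

# phase 2: split each intermediate string on the forward slash
parts = [p for g in groups for p in re.findall(r'[^/]+', g)]
# parts == ['ABC', 'FDF', 'RET', 'YRT', 'UYT', 'ERT']
```

Note that Python's `re.findall` has no need for the backslash escape or the RTRIM trick; those are specific to the Oracle CONNECT BY approach.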

Links

07 April 2014

Current_Schema and the Data Dictionary

Being a huge fan of Logger, the PL/SQL logging utility, I really wanted this to be included in the project that I'm currently working on. So I downloaded it (link at the bottom of this blog) and included it in our deployment scripts. Done... at least I thought so, but of course this wasn't the case.

The regular install script for Logger looks something like the following (parts removed and table names are changed):

set verify off serveroutput on

begin
   execute immediate 'create table new_table (x int)';
end;
/

prompt ****
prompt *** Query user_tab_columns
prompt ****
declare
    l_nullable varchar2(32767);
begin
   select nullable
    into l_nullable
    from user_tab_columns
   where table_name = 'NEW_TABLE'
     and column_name = 'X'
   ;
   dbms_output.put_line('Found the NEW_TABLE, do some stuff to it');
end;
/
The first lines make sure that the user is not prompted for input and that output from DBMS_OUTPUT is shown.
The "create table" is done using dynamic SQL, after which some more action is taken. Which action needs to be taken is determined in the PL/SQL block, here simply replaced by some DBMS_OUTPUT. In this PL/SQL block the data dictionary view USER_TAB_COLUMNS is queried.
This install script is usually run in the schema where the objects need to be installed.

This time, however, the install script needed to be adjusted so that the DBA could run it. As the DBA didn't want to log into different schemas during the complete install, the files needed to include the following at the top of the scripts:

alter session set current_schema = &1;
What this does is best explained in the Oracle documentation:
This setting offers a convenient way to perform operations on objects in a schema other than that of the current user without having to qualify the objects with the schema name. This setting changes the current schema, but it does not change the session user or the current user, nor does it give the session user any additional system or object privileges for the session.
The install script was now called (by the DBA) as follows:
@install.sql SCOTT

Adding the ALTER SESSION command at the top of the install script produced this output

Session altered.


PL/SQL procedure successfully completed.

****
*** Query user_tab_columns
****
declare
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 4
I would have expected that changing the CURRENT_SCHEMA would also make USER_TAB_COLUMNS reflect the schema that we changed into. This is not the case.

New install script

The install script needed to be adjusted to query the DBA_TAB_COLUMNS instead of the USER_TAB_COLUMNS:

set verify off serveroutput on

alter session set current_schema = &1;

begin
   execute immediate 'create table new_table (x int)';
end;
/

prompt ****
prompt *** Query dba_tab_columns
prompt ****
declare
    l_nullable varchar2(32767);
begin
   select nullable
    into l_nullable
    from dba_tab_columns
   where owner = upper ('&1')
     and table_name = 'NEW_TABLE'
     and rownum = 1;
   dbms_output.put_line('Found the NEW_TABLE, do some stuff to it');
end;
/
Now the install script works as desired:
****
*** Query dba_tab_columns
****
Found the NEW_TABLE, do some stuff to it

PL/SQL procedure successfully completed.

Difference between SESSION_USER and CURRENT_SCHEMA

SQL> col session_user format a35
SQL> col current_schema format a35
SQL> select sys_context ('userenv', 'session_user') session_user
  2        ,sys_context ('userenv', 'current_schema') current_schema
  3    from dual
  4  /

SESSION_USER                        CURRENT_SCHEMA
----------------------------------- -----------------------------------
SYS                                 SCOTT

Not so strange

Knowing what you know after reading the above, the following is not so strange anymore:

SQL> desc t
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------
 STR                                                VARCHAR2(25)


SQL> select column_name
  2        ,data_type
  3    from user_tab_columns
  4   where table_name = 'T'
  5  /

COLUMN_NAME                    DATA_TYPE
------------------------------ -------------------------
ID                             NUMBER

These scripts were run on the following version:

SQL> select *
  2    from v$version
  3  /

BANNER
-------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

Links

18 November 2013

PIVOT and UNPIVOT

The PIVOT and UNPIVOT operators were introduced in Oracle 11g and the other day I helped out a friend using these operators.
He wanted the data in the columns to be "shifted to the left". To explain a little more, take a look at the following data:
        ID V1    V2    V3    V4    V5
---------- ----- ----- ----- ----- -----
         1 a     b                 e
         2 aaa   bbbb  cccc
         3             a     b     e
         4 a           c           e
         5       b           d
Above is the data as it appeared in the table.
The requirement was to shift the data to the left like the data below:
        ID C1    C2    C3    C4    C5
---------- ----- ----- ----- ----- -----
         1 a     b     e
         2 aaa   bbbb  cccc
         3 a     b     e
         4 a     c     e
         5 b     d

The data in the columns is moved over one or more columns to the left.
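The UNPIVOT/PIVOT solution boils down to: unpivot the columns into rows, renumber the non-empty values per ID, and pivot back. The reshuffle itself can be sketched in Python (illustrative, using the sample data above with None marking an empty column):

```python
# the sample data from the post; None marks an empty column
rows = {
    1: ['a', 'b', None, None, 'e'],
    2: ['aaa', 'bbbb', 'cccc', None, None],
    3: [None, None, 'a', 'b', 'e'],
    4: ['a', None, 'c', None, 'e'],
    5: [None, 'b', None, 'd', None],
}

def shift_left(values):
    """Keep the non-empty values in order, pad with None on the right."""
    kept = [v for v in values if v is not None]
    return kept + [None] * (len(values) - len(kept))

shifted = {i: shift_left(v) for i, v in rows.items()}
# shifted[1] == ['a', 'b', 'e', None, None]
```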

11 January 2012

Upgrade Oracle XE from 10 to 11: A word of caution

Finally I had some time to upgrade my Oracle XE database from version 10 to version 11, something I wanted to try out but could never find the time to do it. I wanted to try it out, to see if it all worked as advertised, usually it does. This time however I ran into some problems with the export and import of the APEX applications, good thing it was just on my sandbox database.

04 October 2011

Oracle Open World 2011: Oracle Database 11g Features for Developers by Connor McDonald

Yesterday I had the pleasure of attending a session by Connor McDonald. I had heard a lot about him and his presentation style, and I even did a book review back in 2005. Everybody I talk to who has ever attended a session by Connor is always very positive about his presentation style. And it is very impressive: humor, technical knowledge, all the ingredients are there to keep you focused on the content.

The opening slide, which he put up as people walked into the room, invited you to move forward, as the presentation has code samples in the font size shown on that slide. The room filled up, and it was a large room. There was another benefit of being in the front of the room, apart from being able to read the slides: he handed out chocolates, starting from the front. Talking to Connor later in the evening, he told me it takes months to prepare for a presentation, and it really shows. If you get a chance to see Connor do a presentation, attend it and make sure to arrive early!

Part of the presentation was on Edition Based Redefinition, and because I presented on that subject before, I believed there was a flaw in his presentation. But there was not; I was mistaken. The rest of this blogpost shows you where I was mistaken, and Connor was absolutely right.

24 August 2011

Business Rule: Only One per Day, but keep the time

The business rule states:

Only one entry is allowed per ID and per day and the time should be recorded.

The table involved (simplified for the blog post)

SQL> create table test
2 (id number
3 ,inspection_dt date
4 );

Wouldn't it be nice if it was possible to do it like this?

SQL> create table test
2 (id number
3 ,inspection_dt date
4 ,constraint one_per_day unique (id, trunc (inspection_dt))
5 );
,constraint one_per_day unique (id, trunc (inspection_dt))
*
ERROR at line 4:
ORA-00904: : invalid identifier

This way you still have the complete date information (time is a component of the date column), and only use the TRUNC (inspection_dt) to constrain the data entry.
As you can tell from the error message, this is not allowed.
Oracle 11g Release 1 introduced Virtual Columns which can implement this requirement declaratively.
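A sketch of how that implementation could look (the column and constraint names are my own, not from the original post): the truncated date lives in a Virtual Column, and the unique constraint spans ID and that column, while the INSPECTION_DT column keeps its time component.

```sql
create table test
(id             number
,inspection_dt  date
-- virtual column: the date without its time component
,inspection_day date generated always as (trunc(inspection_dt)) virtual
,constraint one_per_day unique (id, inspection_day)
);

-- first entry for id 1 on 24-AUG-2011: allowed, and the time is kept
insert into test (id, inspection_dt)
values (1, to_date('24-08-2011 09:15', 'DD-MM-YYYY HH24:MI'));

-- second entry for id 1 on the same day: rejected,
-- fails with ORA-00001: unique constraint (ONE_PER_DAY) violated
insert into test (id, inspection_dt)
values (1, to_date('24-08-2011 17:30', 'DD-MM-YYYY HH24:MI'));
```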

26 June 2011

Oracle 11g Express Edition bug?

Of course you know that Oracle has released 11g Express Edition, still in beta though. That is probably why I ran into something weird, which must be a bug, while playing with it. Here is what I did:

SQL> set echo on
SQL> col "Current_Edition" format a20
SQL> col object_name format a20
SQL>
SQL> select sys_context('userenv'
2 ,'current_edition_name'
3 ) "Current_Edition"
4 from dual
5 /

Current_Edition
--------------------
ORA$BASE

1 row selected.

SQL>
SQL> create or replace
2 procedure hello
3 is
4 begin
5 dbms_output.put_line ('Hello World');
6 end hello;
7 /

Procedure created.

SQL>
SQL> begin
2 hello;
3 end;
4 /
Hello World

PL/SQL procedure successfully completed.

SQL>
SQL> select object_name
2 , object_type
3 , edition_name
4 from user_objects_ae
5 where object_name = 'HELLO'
6 /

OBJECT_NAME          OBJECT_TYPE         EDITION_NAME
-------------------- ------------------- ------------------------------
HELLO                PROCEDURE

1 row selected.

SQL>

20 December 2010

Using DBMS_UTILITY to aggregate strings

In my presentation "Analytic Functions Revisited" there is a section on how to string together some columns: string aggregation. Why? Because there is a new function in Oracle 11g Release 2, called LISTAGG, which allows you to do this. This LISTAGG function also has an analytic counterpart, and that's the reason it is in the presentation.
At the UKOUG TEBS conference I did this presentation and someone approached me afterwards to talk about these techniques. It boiled down to: "Why do it the hard way when there is a built-in package that does this for you?"
The built-in package he was referring to is, note the title of this blog entry, DBMS_UTILITY. There are more downsides to using DBMS_UTILITY than I initially thought.
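For reference, this is what the LISTAGG function mentioned above looks like, both as an aggregate and as its analytic counterpart (shown here against the standard EMP demo table):

```sql
-- aggregate form: one row per department
select deptno
     , listagg(ename, ', ') within group (order by ename) as enames
  from emp
 group by deptno;

-- analytic counterpart: the aggregated list repeated on every row
select deptno
     , ename
     , listagg(ename, ', ') within group (order by ename)
         over (partition by deptno) as enames
  from emp;
```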

01 November 2010

Edition Based Redefinition and USER_OBJECTS_AE

Last week the Oracle PL/SQL Programming (OPP) and APEXposed conference took place in Brussels, Belgium.

My session was on Edition Based Redefinition, the killer feature of the Oracle 11g Release 2 database. One of my demos showed what a procedure looks like, from the data dictionary standpoint, when you override it in a newer edition.
In earlier releases of the Oracle database it was not possible to have two procedures (or any other object, for that matter) with the same name in the same database schema.
With Edition Based Redefinition you can have two procedures with the same name, as long as they are in different Editions.
For this demo I will use the new data dictionary view USER_OBJECTS_AE.
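The demo boils down to something like the following sketch (the edition name is a placeholder, the user must have editions enabled first, and creating an edition requires the CREATE ANY EDITION privilege):

```sql
-- one-time setup, as a privileged user:
--   alter user demo enable editions;

create or replace procedure hello is
begin
  dbms_output.put_line('Hello from the base edition');
end hello;
/

create edition release2 as child of ora$base;
alter session set edition = release2;

-- override the procedure in the new edition
create or replace procedure hello is
begin
  dbms_output.put_line('Hello from release2');
end hello;
/

-- USER_OBJECTS_AE shows both versions, one per edition
select object_name, object_type, edition_name
  from user_objects_ae
 where object_name = 'HELLO';
```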

11 July 2010

Connecting Without TNSNames With EZConnect

On my laptop I run two databases, Oracle 10g Release 2 and Oracle 11g Release 2. Next to my regular consulting work, I also give an Oracle Advanced SQL and PL/SQL training called "7Up". This training covers "all" new features since Oracle7.
I tried using a VM to run multiple versions of the database, but my laptop would grind to a halt.
Anyway, I usually use SQL*Plus for all my demos, the Windows version that is. As you may know, the Windows version of SQL*Plus vanished in Oracle 11g.
Until now I was using the SQL*Plus command line instead, and that was fine.
Last week I received the latest Oracle Magazine (July/August 2010) at home, and in it was the AskTom column on connecting without TNSNames.
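With EZConnect you can skip the tnsnames.ora file entirely; the credentials, host and service name below are just examples, not real connection details:

```shell
# EZConnect: user[/password]@[//]host[:port][/service_name]
# the port defaults to 1521 when omitted
sqlplus scott/tiger@//localhost:1521/orcl
sqlplus scott/tiger@localhost/orcl
```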

04 June 2010

Check your Datatype, also in Check Constraints

A question came up at the Oracle discussion forum on SQL and PL/SQL regarding not being able to see any data even though the table did contain records. My lesson from this question was to pay close attention to the datatypes and the implicit conversions taking place. What still puzzles me is how to detect such a mistake other than by looking very closely.
Anyway these queries didn't return any records:

SELECT * FROM problem WHERE solved = '0';
SELECT * FROM problem WHERE solved = 0;
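I don't know the exact table definition from the forum thread, but the general pitfall is easy to reconstruct with a hypothetical example: comparing a VARCHAR2 column to a number forces an implicit TO_NUMBER on the column value of every row inspected.

```sql
-- hypothetical reconstruction, not the actual forum case
create table problem
(id     number
,solved varchar2(1)
);

insert into problem values (1, 'N');
insert into problem values (2, '0');

-- string-to-string comparison: returns the row containing '0'
select * from problem where solved = '0';

-- implicitly rewritten to: where to_number(solved) = 0
-- the row containing 'N' makes this fail with
-- ORA-01722: invalid number
select * from problem where solved = 0;
```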


07 April 2010

Oracle 11gR2: Editions and SQL Developer

I know the correct term is Edition Based Redefinition, but that would make the title of this blog post a bit too long.
Over the last weekend Oracle 11gR2 was released on the Windows platform, good news for me. On my laptop I run Windows, so I could finally upgrade my demo database from Release 1 to Release 2. Until now I had been doing demos with a virtual machine, which was OK, like the one I did last year during the Planboard Symposium.
While I was preparing for that presentation, support for Edition Based Redefinition was added to SQL Developer 2.1, at least that's what the New Features list stated. But since March 1st there is SQL Developer 2.1.1, which does have support for Edition Based Redefinition.
This blog post is not going to explain Edition Based Redefinition in detail, but it will show you the support that SQL Developer has for it.

16 January 2010

The Case for the Case Statement

Oracle 9i introduced the CASE statement and CASE expressions.
Some say that the CASE statement is a drop-in replacement for the IF statement, but there is a subtle difference between the two.
In this post I will explain the difference between the CASE statement and the IF statement. Sometimes it's "better" to change an IF statement into a CASE statement.
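The subtle difference in a nutshell: when no branch matches and there is no ELSE, an IF statement silently does nothing, while a CASE statement raises the CASE_NOT_FOUND exception (ORA-06592).

```sql
declare
  v_num number := 3;
begin
  -- no branch matches and there is no ELSE: the IF simply falls through
  if v_num = 1 then
    dbms_output.put_line('one');
  elsif v_num = 2 then
    dbms_output.put_line('two');
  end if;

  -- no WHEN matches and there is no ELSE:
  -- raises ORA-06592: CASE not found while executing CASE statement
  case v_num
    when 1 then dbms_output.put_line('one');
    when 2 then dbms_output.put_line('two');
  end case;
end;
/
```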

19 December 2008

Business Rule: Only One Clerk per Department

During the 7Up workshop, a workshop geared towards experienced developers who want to get up to speed with everything that has happened in Oracle since release 7, one of the “tricks” we show is how to implement the Business Rule:

Only one Clerk per Department is Allowed

The way we show how to declaratively implement this Business Rule is by using a Unique Function Based Index. Every now and then someone will comment that this implementation is a hack, “'cause an Index is not meant to implement a Business Rule, it’s there to enhance performance.”
I don’t necessarily agree with this, but I do believe that Oracle 11g offers a more elegant solution (albeit a very similar one, though arguably less of a hack).

First let’s take a look at the “hack”, with the Unique Function Based Index. Then we’ll look at the way to do the same in Oracle 11g. Lastly I will show you the similarity between the two.

Unique Function Based Index


First off, the Function Based Index. FBIs were introduced in Oracle 8i and allow you to create an index using a function.
They are especially useful when you have a query like:

select *
from emp
where upper (ename) = :ename
;
When the ename column in the example above is indexed the regular way, that index cannot be used because of the UPPER function.
Indexing the ename column with the UPPER function applied does allow the index to be used. Here is the code for this example:
create index upper_name
on emp (upper(ename))
When the index is created this way, it can be used by the query above.

The way to implement the Business Rule “Only one Clerk per Department”:
create UNIQUE index one_clerk_per_deptno
on emp (case job
when 'CLERK'
then deptno
end
)
Using a simple CASE expression, the DEPTNO value only ends up in the index when the JOB equals CLERK; for all other rows the expression evaluates to NULL, and entirely NULL keys are not stored in a B-tree index. Because it is a unique index, there can only be one CLERK in each department.

Virtual Columns


Some might call the implementation of the Business Rule in the previous section a “hack”: instead of using an index to increase performance, the index is used to declaratively implement a Business Rule.
Oracle 11g introduced Virtual Columns. I like Virtual Columns more and more because of the multitude of possibilities they offer. Using a Virtual Column to implement this Business Rule is very straightforward.
Instead of using the CASE expression in an index, we will use it (this time written as a searched CASE) in the definition of the Virtual Column:
alter table emp
add one_clerk_per_dept as (case
when job='CLERK'
then deptno
end
)
With the statement above an extra column is added to the EMP table. This Virtual Column, based on the CASE expression, will only show a value when the JOB equals CLERK.
SQL> select job
2 , one_clerk_per_dept
3 from emp
4 /

JOB       ONE_CLERK_PER_DEPT
--------- ------------------
CLERK                     20
SALESMAN
SALESMAN
MANAGER
SALESMAN
MANAGER
MANAGER
ANALYST
PRESIDENT
SALESMAN
CLERK                     20
CLERK                     30
ANALYST
CLERK                     10

As you can see in the result set above, the ONE_CLERK_PER_DEPT column only has a value when the JOB is CLERK. Because the Business Rule says “Only one Clerk per Department” we need to make this column UNIQUE.
With the current data that is not possible, because department 20 has two CLERKs. One has to go…
SQL> update emp
2 set job = 'NOCLERK'
3 where job = 'CLERK'
4 and rownum = 1
5 /

1 row updated.

SQL> alter table emp
2 add constraint one_clerk_uk UNIQUE (one_clerk_per_dept)
3 /

Table altered.

SQL> update emp
2 set job = 'CLERK'
3 where job = 'NOCLERK'
4 /
update emp
*
ERROR at line 1:
ORA-00001: unique constraint (ALEX.ONE_CLERK_UK) violated
And now the Business Rule is implemented in a more “natural” way, without resorting to a “hack”.


The Same Difference…


Both methods implement the same Business Rule, and both implement it declaratively. To some people the first method feels like a “hack” and the second method more “natural”. But how different are they really?
Not as different as you might have suspected. When you use the first method (hacking with a Function Based Index), the USER_IND_COLUMNS data dictionary view reveals the implementation.

SQL> select column_name
2 from user_ind_columns
3 where index_name = 'ONE_CLERK_PER_DEPT'
4 /

COLUMN_NAME
---------------------------------------------------------
SYS_NC00010$

SQL> select job
2 , SYS_NC00010$
3 from emp
4 /

JOB       SYS_NC00010$
--------- ------------
NOCLERK
SALESMAN
SALESMAN
MANAGER
SALESMAN
MANAGER
MANAGER
ANALYST
PRESIDENT
SALESMAN
CLERK               20
CLERK               30
ANALYST
CLERK               10
A hidden column with a very obscure name has been added to the table. Apparently a Function Based Index is pretty similar to a Virtual Column.
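You can see just how similar by querying USER_TAB_COLS (note: not USER_TAB_COLUMNS, which filters out hidden columns); if my reading of the data dictionary is right, both the FBI's system-generated column and the explicit Virtual Column are flagged as virtual, and only the FBI's column is flagged as hidden:

```sql
select column_name
     , virtual_column
     , hidden_column
  from user_tab_cols
 where table_name = 'EMP'
   and virtual_column = 'YES';
```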