12 November 2020

Automatically Not Null when adding a DEFAULT Clause

Today I learned something new about the DEFAULT ON NULL clause.
Since Oracle Database release 12 it is possible to define a DEFAULT ON NULL clause, which is used when NULL is provided as the value. The thing I didn't know is that the column is then automatically defined as NOT NULL.
Is this documented behaviour? Yes, it is. Quote from the documentation:

When you specify ON NULL, the NOT NULL constraint and NOT DEFERRABLE constraint state are implicitly specified.

SQL> select banner
  2    from v$version
  3  /

Oracle Database 19c Enterprise Edition Release - Production
SQL> create table t 
  2  (col1 varchar2(10) default on null 'hello')
  3  /

Table T created.

SQL> desc t
Name Null?    Type         
---- -------- ------------ 
COL1 NOT NULL VARCHAR2(10) 

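For comparison, a plain DEFAULT clause (without ON NULL) does not add the constraint. A sketch of what I'd expect (table name T2 is made up for this illustration):

```sql
SQL> create table t2
  2  (col1 varchar2(10) default 'hello')
  3  /

Table T2 created.

SQL> desc t2
Name Null?    Type         
---- -------- ------------ 
COL1          VARCHAR2(10) 
```

Inserting an explicit NULL into T2 stores NULL; with the ON NULL variant it would store 'hello' instead.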
And in case you are wondering if you can change the column afterwards to NULL, the answer is "no".

SQL> alter table t 
  2  modify (col1 null)
  3  /

Error starting at line : 1 in command -
alter table t 
modify (col1 null)
Error report -
ORA-01451: column to be modified to NULL cannot be modified to NULL
01451. 00000 -  "column to be modified to NULL cannot be modified to NULL"
*Cause:    the column may already allow NULL values, the NOT NULL constraint
           is part of a primary key or check constraint.
*Action:   if a primary key or check constraint is enforcing the NOT NULL
           constraint, then drop that constraint.

22 October 2020

Qualified Expressions Quite Puzzling - Part Three: Bug!

After some back and forth with Patrick Barel and Steven Feuerstein, it became clear that this is most likely a bug in Qualified Expressions.
A simple case to reproduce the bug is below:

SQL> set serveroutput on

SQL> declare
  2     type tbl_t is table of pls_integer
  3        index by pls_integer;
  4     l_tbl tbl_t;
  5     l_idx pls_integer;
  6  begin
  7     for i in 1..3
  8     loop
  9        l_tbl := tbl_t (i => i);
 10     end loop;
 11     --
 12     l_idx := l_tbl.first;
 13     while l_idx is not null
 14     loop
 15        sys.dbms_output.put_line (to_char (l_idx)
 16                                 ||' - '||
 17                                 to_char (l_tbl(l_idx))
 18                                 );
 19        l_idx := l_tbl.next (l_idx);
 20     end loop;
 21  end;
 22  /

1 - 1
2 - 2
3 - 3

PL/SQL procedure successfully completed.
Why is the above a bug? If you do a similar assignment, but without a loop, the outcome is different (and most likely correct):
SQL> declare
  2     type tbl_t is table of pls_integer
  3        index by pls_integer;
  4     l_tbl tbl_t;
  5     l_idx pls_integer;
  6  begin
  7     l_tbl := tbl_t (1 => 1);
  8     l_tbl := tbl_t (2 => 2);
  9     l_tbl := tbl_t (3 => 3);
 10     --
 11     l_idx := l_tbl.first;
 12     while l_idx is not null
 13     loop
 14        sys.dbms_output.put_line (to_char (l_idx)
 15                                 ||' - '||
 16                                 to_char (l_tbl(l_idx))
 17                                 );
 18        l_idx := l_tbl.next (l_idx);
 19     end loop;
 20  end;
 21  /

3 - 3

PL/SQL procedure successfully completed.
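For completeness: assigning per element, instead of replacing the whole collection with a Qualified Expression, behaves consistently inside a loop. A short sketch:

```sql
SQL> declare
  2     type tbl_t is table of pls_integer
  3        index by pls_integer;
  4     l_tbl tbl_t;
  5  begin
  6     for i in 1..3
  7     loop
  8        l_tbl (i) := i; -- per-element assignment: adds to the collection
  9     end loop;
 10     sys.dbms_output.put_line (to_char (l_tbl.count)||' elements');
 11  end;
 12  /
3 elements

PL/SQL procedure successfully completed.
```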


21 October 2020

Qualified Expressions Quite Puzzling - Part Two

After I published my blog about Qualified Expressions, my dear friend Patrick Barel commented that my remarks about the Cursor For Loop were not at all what he expected to happen.

This is the code block that Patrick referred to.

 10     for r in (select * from t)
 11     loop
 12           l_tbl := tbl_t (l_tbl.count + 1 => rec_t (num1 => r.id
 13                                                    ,num2 => r.id));
 14     end loop;
with these results
1 - 1
2 - 2
3 - 3
Looking at this code example differently, you might expect the outcome to be only one record: the last one.
Each iteration would replace the complete Associative Array with a new instance of it.

So there was another puzzling thing in the original blogpost that I hadn't noticed before, probably because I wasn't looking for it. Why isn't the whole Associative Array replaced? It seems that this code appends values to the Associative Array, and that was my objective. Strange indeed.
Anyway, the better way of appending values to an Associative Array with Qualified Expressions is to put the index on the other side of the assignment:
 10     for r in (select * from t)
 11     loop
 12        l_tbl (l_tbl.count + 1) := rec_t (num1 => r.id
 13                                         ,num2 => r.id);
 14     end loop;

With this syntax, even when the pointless condition is added inside the For Loop, all values from the table are inserted into the Associative Array:

 10     for r in (select * from t)
 11     loop
 12        if r.yes_no = 'Y'
 13        then
 14           l_tbl (l_tbl.count + 1) := rec_t (num1 => r.id
 15                                            ,num2 => r.id);
 16        else
 17           l_tbl (l_tbl.count + 1) := rec_t (num1 => r.id
 18                                            ,num2 => r.id);
 19        end if;
 20     end loop;
with all records from the table populated in the Associative Array:
1 - 1
2 - 2
3 - 3

20 October 2020

Qualified Expressions Quite Puzzling

Qualified expressions for PL/SQL were introduced in version 18 of the Oracle database. They provide a clearer way of working with collections. At the bottom of this post is the link to the official documentation.
However, recently I encountered something that left me puzzled and that I still can't explain. So this blogpost is just about my observations, and I welcome any comments.

First a short demo of how you can work with Qualified Expressions.

SQL> select banner
  2    from v$version;

Oracle Database 18c EE High Perf Release - Production   
SQL> set serveroutput on

SQL> declare
  2     type rec_t is record
  3         (num1 number
  4         ,num2 number);
  5     type tbl_t is table of rec_t
  6        index by pls_integer;
  7     l_tbl tbl_t;
  8     l_idx pls_integer;
  9  begin
 10     for i in 1..3
 11     loop
 12        l_tbl := tbl_t (i => rec_t (num1 => i
 13                                   ,num2 => i));
 14     end loop;
 15     --
 16     l_idx := l_tbl.first;
 17     while l_idx is not null
 18     loop
 19        sys.dbms_output.put_line (to_char (l_tbl(l_idx).num1)
 20                                 ||' - '||
 21                                 to_char (l_tbl(l_idx).num2)
 22                                 );
 23        l_idx := l_tbl.next (l_idx);
 24     end loop;
 25  end;
 26  /
On lines 2 - 4 a Record Type is declared with two numeric attributes. Lines 5 and 6 define an Associative Array of the Record Type.
The Qualified Expression is on lines 12 and 13.
In the loop we assign the loop index variable to the Record Type using the Qualified Expression syntax.
To view the content of the Associative Array, lines 16 through 24 are used. When running this block of code the output is:
1 - 1
2 - 2
3 - 3
No surprises here; it works as expected. Well, actually it is surprising: take a look at the follow-up blog.

Cursor For Loop

The previous example used a Numeric For Loop, but I observed strange things when using a Cursor For Loop.

SQL> drop table t purge
  2  /

Table T dropped.

SQL> create table t
  2  (id number
  3  ,yes_no varchar2(1)
  4  )
  5  /

Table T created.

SQL> insert into t values (1, 'Y');

1 row inserted.

SQL> insert into t values (2, 'N');

1 row inserted.

SQL> insert into t values (3, 'Y');

1 row inserted.

SQL> commit;

Commit complete.
The table contains three records with different Yes/No-indicators. Instead of showing the complete block of code, I will only show the changes I made to the previous example.
 10     for r in (select * from t)
 11     loop
 12           l_tbl := tbl_t (l_tbl.count + 1 => rec_t (num1 => r.id
 13                                                    ,num2 => r.id));
 14     end loop;
The Numeric For Loop is removed and a Cursor For Loop is inserted in the code block. Instead of using the index variable from the first code example, I use the current count of the Associative Array plus one (l_tbl.count + 1).
The results are the same as the first code block.
1 - 1
2 - 2
3 - 3

...but you said Strange Things?

Yes, I did... but so far, it seems normal.
The strange thing happens when a condition is added inside the For-Loop.

 10     for r in (select * from t)
 11     loop
 12        if r.yes_no = 'Y'
 13        then
 14           l_tbl := tbl_t (l_tbl.count + 1 => rec_t (num1 => r.id
 15                                                    ,num2 => r.id));
 16        else
 17           l_tbl := tbl_t (l_tbl.count + 1 => rec_t (num1 => r.id
 18                                                    ,num2 => r.id));
 20        end if;
 21     end loop;
The table has a Yes_No column, and inside the For Loop the value of this column is used.
When you examine the code above, you will notice that it assigns a value to the collection regardless of the value of the column. The assignment is exactly the same in lines 14-15 and 17-18. The content of the Associative Array after the loop should be the same as before, but this is not the case.
The result looks like this:
  1 - 1
  3 - 3
What happened to the second record?... Your guess is as good as mine.

I've tried several variations, like using a local counter to assign instead of the collection.count + 1, but that doesn't make a difference.
As said before: I welcome your comments for more insight into this strange behavior.

The same is reproducible on Oracle 19...

SQL> select banner
  2    from v$version
  3  /

Oracle Database 19c Enterprise Edition Release - Production  


05 June 2020

Conditional Compilation; One Dynamic Package Specification

When you want to make a distinction between different databases (like Development, Test, Acceptance, and Production) and have a single code base, conditional compilation can help out. Most of the time I read about conditional compilation when the use case is to eliminate code from the production environment, like instrumentation, which is of course a legit scenario. Some functionality should never run on Production, but likewise some functionality should never run on Dev, Test or Acceptance.

For example: we send out emails to customers about their order statuses. This functionality should only run on the Production database, and never ever on any of the other databases. Initially there was a toggle in the application dictating whether the emails should go to customers or to a dummy email address within the company. How fool-proof is this solution? Short answer: it's not. Sooner or later someone will set the toggle to "Send to Customers" in the wrong environment. Of course there are many more examples like this one.

Anyway, to determine the environment, which can then be used for conditional compilation, I use a package specification that is compiled dynamically. Using the database name, which can be found in the USERENV context, the value is determined. To output a textual representation of a boolean value, I use LOGGER.TOCHAR, which outputs "true", "false" or "null", but of course you can also write your own boolean-to-text translation.

declare
   l_pack_spec varchar2(32767);
   l_env       varchar2(25);
begin
   l_env := sys_context ('userenv', 'db_name');
   l_pack_spec := 'create or replace package environment_pkg'||chr(10);
   l_pack_spec := l_pack_spec ||'is'||chr(10);
   l_pack_spec := l_pack_spec ||'   --=='||chr(10);
   l_pack_spec := l_pack_spec ||'   -- Environment Information, useful for conditional compilation'||chr(10);
   l_pack_spec := l_pack_spec ||'   development constant boolean := '||lower (logger.tochar (l_env = 'DEV'))||';'||chr(10);
   l_pack_spec := l_pack_spec ||'   test        constant boolean := '||lower (logger.tochar (l_env = 'TST'))||';'||chr(10);
   l_pack_spec := l_pack_spec ||'   acceptance  constant boolean := '||lower (logger.tochar (l_env = 'ACC'))||';'||chr(10);
   l_pack_spec := l_pack_spec ||'   production  constant boolean := '||lower (logger.tochar (l_env = 'PRD'))||';'||chr(10);
   l_pack_spec := l_pack_spec ||'   --=='||chr(10);
   l_pack_spec := l_pack_spec ||'end environment_pkg;'||chr(10);
   execute immediate l_pack_spec;
end;
/
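If you don't have Logger installed, a home-grown boolean-to-text translation could look like this (a minimal sketch; the function name is made up):

```sql
create or replace function bool_to_char (p_bool in boolean)
   return varchar2
is
begin
   -- translate a boolean into the literal text needed in the generated source
   return case p_bool
             when true  then 'true'
             when false then 'false'
             else 'null'
          end;
end bool_to_char;
/
```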

When you run the code above, the resulting code will be something like:

package environment_pkg
is
   --==
   -- Environment Information, useful for conditional compilation
   development constant boolean := true;
   test        constant boolean := false;
   acceptance  constant boolean := false;
   production  constant boolean := false;
   --==
end environment_pkg;

Indicating that the current database is the Development database.

Now you can use conditional compilation to include or exclude certain code sections, like:

$if environment_pkg.production $then
   -- Here comes the code to send an email to customers
$else
   -- Here comes the code to send an email to a dummy internal address
$end

To see the effect of conditional compilation (what does the code actually look like?), you can use the built-in procedure DBMS_PREPROCESSOR.PRINT_POST_PROCESSED_SOURCE, which writes the result to DBMS_OUTPUT:

set serveroutput on
begin
   dbms_preprocessor.print_post_processed_source (object_type => 'PACKAGE BODY'
                                                 ,schema_name => user
                                                 ,object_name => 'MAIL_PKG' -- package where you use the conditional compilation
                                                 );
end;
/

24 April 2020

First and Last Day of the ISO-year

The customer requested a report regarding the sales figures Year to Date, per week. More specifically: per ISO-week.
For more information about ISO-weeks, go to Wikipedia: ISO-week
Initially I didn't think too much about it; I knew Oracle provides the format mask "IW" to get the ISO-week number and thought that would do. But there is a little bit more to consider.
More information on format masks in Oracle Database 19c: Format Masks

Formatting the date to reflect the ISO-week is easy enough:

to_char (sysdate, 'iw')
which would give you output like '03' or '14'.
Just formatting the dates with this format mask could be sufficient... but there might be a snag to it.

The customer in my case wanted to see the data per year, so the first selection would be the year. Then a report would be shown with the number of orders per ISO-week.
To get all the relevant data from the database I would add a filter like this one:
where order_date between to_date ('01-01-'||:P_YEAR, 'dd-mm-yyyy')
                     and to_date ('31-12-'||:P_YEAR, 'dd-mm-yyyy')
to select all the data for a given calendar year. The benefit of using calendar years is that you know they always start on January 1 and end on December 31. Simply concatenate the year that the customer selected and turn that string into a proper DATE.

While I was testing the reports for the customer, I noticed that when I selected 2019 as the year, I would also see data for week 01. What's so strange about that? Well, the system didn't go live until August of 2019. There shouldn't be any data for the first week of 2019. Searching through the database backed up my assumption: the earliest orders in the system started in August 2019.

And this is the trap I fell into: the combination of a calendar year with ISO-weeks.
Let's take a closer look at the end of 2019 and the start of 2020.

SQL> with test as 
  2  (select date '2019-12-29' as dt from dual
  3  union all select date '2019-12-30' as dt from dual
  4  union all select date '2019-12-31' as dt from dual
  5  union all select date '2020-01-01' as dt from dual
  6  union all select date '2020-01-02' as dt from dual
  7  union all select date '2020-01-03' as dt from dual
  8  union all select date '2020-01-04' as dt from dual
  9  union all select date '2020-01-05' as dt from dual
 10  )
 11  select dt
 12        ,to_char (dt, 'iw') as iso_week
 13        ,to_char (dt, 'ww') as calendar_week
 14   from test
 15 /

DT       ISO_WEEK   CALENDAR_WEEK  
-------- ---------- ---------------
29-12-19 52         52             
30-12-19 01         52             
31-12-19 01         53             
01-01-20 01         01             
02-01-20 01         01             
03-01-20 01         01             
04-01-20 01         01             
05-01-20 01         01             

8 rows selected.
Note the difference between the ISO_WEEK and CALENDAR_WEEK columns. The first ISO_WEEK of 2020 starts on December 30 2019, while the first CALENDAR_WEEK starts on January 01 2020.
Also note that calendar week 53 of 2019 is only one day long.

Because I was selecting all order data in the selected year, up to December 31 2019, and then formatting it using ISO-weeks, the report would show week 01... It just wasn't week 01 of 2019, but of 2020.
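The ISO-year that a date belongs to can be shown with the "IYYY" format mask, which makes the mismatch visible (a quick sketch):

```sql
SQL> select to_char (date '2019-12-31', 'yyyy') as cal_year
  2        ,to_char (date '2019-12-31', 'iyyy') as iso_year
  3    from dual
  4  /

CAL_ ISO_
---- ----
2019 2020
```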

Getting the First Day of an ISO-year

I decided that, to get all the data correctly aligned, I would use the first and last day of the ISO-year. The disadvantage is that the first day of an ISO-year is not fixed; it might fall in the year before. The same is true for the last day of the ISO-year; it might fall in the year after the one you selected.
Using the format masks (see the documentation link at the top of this blog), it is also possible to truncate a date to the first day of its ISO-year.

SQL> select trunc (date '2020-01-01', 'iyyy')
  2    from dual
  3  /

TRUNC(DA
--------
30-12-19

This seems to work, but it doesn't... or at least not for the purpose that I need it for.
If I pass in January 01 2020, I get the first day of the ISO-year, which is December 30 2019. That's correct.
When passing in January 01 2021, I get the following result:
SQL> select trunc (date '2021-01-01', 'iyyy')
  2    from dual
  3  /

TRUNC(DA
--------
30-12-19

And this is not what I expected... It turns out that January 01 2021 is still in ISO-year 2020, and the first day of that ISO-year is December 30 2019.

The "trick" is not to pass in a date that might not be in the same year as the ISO-year: don't choose a date in the last calendar week of December or the first calendar week of January.
Picking a random date in the middle of the year will give you the first day of the ISO-year:

SQL> select trunc (date '2021-08-05', 'iyyy')
  2    from dual
  3  /

TRUNC(DA
--------
04-01-21


Getting the Last Day of an ISO-year

Use the same trick as for getting the first day of the ISO-year:
get the first day of the ISO-year following the one you're interested in and subtract one second:

SQL> select trunc (date '2022-08-05', 'iyyy') - interval '1' second
  2    from dual
  3  /


And the WHERE-clause with the bind variable from the page:

between trunc (to_date (:P_YEAR||'-08-05', 'yyyy-mm-dd'), 'iyyy')
    and trunc (to_date (to_char (to_number (:P_YEAR) + 1)||'-08-05', 'yyyy-mm-dd'), 'iyyy') - interval '1' second
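For :P_YEAR = 2020 this evaluates to the boundaries of the full ISO-year (a quick check, with an explicit format mask to show the time part):

```sql
SQL> select to_char (trunc (date '2020-08-05', 'iyyy')
  2                 ,'dd-mm-yyyy hh24:mi:ss') as first_day
  3        ,to_char (trunc (date '2021-08-05', 'iyyy') - interval '1' second
  4                 ,'dd-mm-yyyy hh24:mi:ss') as last_day
  5    from dual
  6  /

FIRST_DAY           LAST_DAY           
------------------- -------------------
30-12-2019 00:00:00 03-01-2021 23:59:59
```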

25 March 2020

Taming The Queue - when the third party can only handle so many connections

We had a problem with the awesome power that the Oracle database offers... or, to be more specific, the third party had a problem with that awesome power.

Setting the stage

Our application handles many, many orders per day, and mostly they are processed in waves during the day, meaning there are peaks and valleys in the number of orders that need to be processed. When the orders are processed they are sent to a third-party application by using a REST service. Informing the third party doesn't have to be "real time", as long as they get the information.
The way we set up our side of the application is as follows:

  • When the order has a certain status, enough information is placed on a Queue (Advanced Queueing)
  • A callback function on the Queue will construct the JSON and do the REST call (using APEX_WEBSERVICE)
The advantage of this method is that the operational system can continue working without having to wait for the REST service to complete. Informing the other system is done in the background.
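Registering such a callback on the Queue can be sketched like this (all names are made up; creating the queue table and queue itself is omitted):

```sql
begin
   -- register a PL/SQL callback that fires when a message is enqueued
   dbms_aq.register
      (reg_list  => sys.aq$_reg_info_list
                       (sys.aq$_reg_info
                           ('my_schema.order_status_q'               -- queue name (made up)
                           ,dbms_aq.namespace_aq                     -- AQ namespace
                           ,'plsql://my_schema.mypkg.order_callback' -- callback procedure (made up)
                           ,null                                     -- no user context
                           )
                       )
      ,reg_count => 1
      );
end;
/
```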

The Problem

The advantage of having the database handle the Queue is that it "auto-scales". Because it runs in the background, it can spin up as many "sessions" as it wants to perform the callback function and clear the Queue. If the system gets busy, it will scale down the processing of the Queue so that the operational system doesn't slow down to a crawl. Simply put: awesome.
However, the third party could only handle a limited number of connections, four to be precise. The consequence was that when the database spun up many sessions, a lot of them would get an error: "Concurrent request limit exceeded. Request blocked." and the orders had to be retried.


As we knew that the third party could only handle four "sessions", we preferred to reduce the number of sessions that the database would spin up. However, there is (as far as I could tell) no way to tell Oracle "you can only use at most four sessions to clear the Queue".
Reaching out on Twitter, it was suggested to adjust the AQ_TM_PROCESSES parameter, but that controls the time managers, not the number of "sessions".
Another suggestion was to tweak JOB_QUEUE_PROCESSES, but to be honest we didn't try that. There are other processes that utilize DBMS_SCHEDULER (e.g. for importing orders) that we didn't want this parameter to interfere with.
Thanks to Vidar Eidissen and Philipp Salvisberg for your responses on Twitter.

Our Solution

The method that we implemented was as follows:

  1. Get an exclusive lock
  2. Construct the necessary JSON and do the REST Call
  3. Release the lock

To facilitate this method, I created a table which is just to coordinate the locks:

create table lock_handles
   (handle varchar2(10)
   ,nr     number
   ,constraint lh_pk primary key (handle)
   );

insert into lock_handles values ('LOCK1', 0);
insert into lock_handles values ('LOCK2', 0);
insert into lock_handles values ('LOCK3', 0);
insert into lock_handles values ('LOCK4', 0);
commit;

The only purpose this table serves is locking; thinking about it a little bit more, I doubt that I even need the second column.

Next, a function to get a lock. Of course this function is placed in a package (for the example, the name of the package is "MYPKG").

  function get_lock
    return lock_handles.handle%type
  is
    e_locked exception;
    pragma exception_init (e_locked, -54);
    function get_lock (p_handle in lock_handles.handle%type)
       return lock_handles.handle%type
    is
       l_nr lock_handles.nr%type;
    begin
       select lh.nr
         into l_nr
         from lock_handles lh
        where lh.handle = p_handle
          for update nowait;
       return p_handle;
    end get_lock;
  begin
    for i in 1..4
    loop
       begin
          return get_lock ('LOCK'||to_char (i));
       exception
          when e_locked
          then
             null; -- this row is locked, try the next handle
       end;
    end loop;
    -- wait for a little bit before attempting 
    -- to get the lock again by recursively 
    -- calling this function
    sys.dbms_session.sleep (10);
    return mypkg.get_lock;
  end get_lock;
When calling this function, it will first attempt to lock the row in the table with handle "LOCK1". Instead of waiting when that row is locked (FOR UPDATE NOWAIT), an exception is raised. With the Pragma Exception Init, the exception is named E_LOCKED and can thus be handled by name.
When the exception is raised, another session has a lock on that row, so the function will attempt to lock the row with handle "LOCK2", and so on.
If no lock can be obtained and an exception is raised for each handle, the function waits for a little bit (10 seconds) and then calls itself recursively to start all over again.
Once a function call obtains a lock on a row, the calling program can continue and do the REST call.
When the REST call is completed, the lock on the table is released by ending the transaction with a commit.
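Putting it together, the callback would use the function along these lines (a sketch; the REST call itself is left out):

```sql
declare
   l_handle lock_handles.handle%type;
begin
   l_handle := mypkg.get_lock; -- blocks until one of the four rows is locked
   -- construct the JSON and do the REST call (APEX_WEBSERVICE) here
   commit; -- ending the transaction releases the row lock again
end;
/
```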

Does it work?

So much for the theory, but does it work? A valid question, of course, and I was shocked by my first tests. I tried processing a mere 15 orders and to my surprise discovered that only the first lock handle, LOCK1, was used.
Then it dawned on me: the queue could easily be handled by a single session, so the callback would just serialize the processing.
Increasing the load on the Queue did indeed show that all lock handles were used. It was also nice to see that after a certain time the usage of the lock handles decreased again. It scales to whatever Oracle deems a good method of processing the Queue.

Could it be handled differently? Yes, probably and I'm open for suggestions.
Could the code be improved? Yes, probably and I'm open for suggestions.


Just as I was finishing up this blogpost, Connor McDonald chimed in on Twitter.
Well, yes Connor, yes, it certainly does...
The rewritten method is as follows:
   function get_lock
      return varchar2
   is
      l_lock integer;
   begin
      loop
         for i in 1..4 -- max number of concurrent sessions
         loop
            l_lock := dbms_lock.request (id                => to_number ('223344'||to_char (i)) -- arbitrary locknr
                                        ,lockmode          => dbms_lock.x_mode -- exclusive mode
                                        ,timeout           => 0 -- don't wait to obtain a lock
                                        ,release_on_commit => true -- release when transaction ends
                                        );
            if l_lock = 1 -- Timeout
            then
               continue; -- try the next lock
            else
               return 'LOCK'||to_char (i);
            end if;
         end loop;
         -- A variation to the suggestion that
         -- Connor made in the comments.
         -- I've added an extra pause here
         -- before attempting to get another lock
         sys.dbms_session.sleep (10);
      end loop;
   end get_lock;
No need for the extra table, and the code is a lot shorter.