tag:blogger.com,1999:blog-12717632270025538352024-03-16T02:11:57.974+01:00Notes on OracleOracle Things I Got to Remember Not to ForgetAlex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.comBlogger159125tag:blogger.com,1999:blog-1271763227002553835.post-55645542229638574462023-02-23T11:01:00.001+01:002023-02-23T11:01:07.245+01:00APEX Interactive Grid: Cell Selection as Default<p>
When you want to copy a specific value from an Interactive Grid, you need to change Row Selection to Cell Selection in the Actions menu.<br>
When you don't want to expose the Actions menu, or simply want to save your users a step, it can be more convenient to have Cell Selection as the default.<br>
To make Cell Selection the default, add the following code to the JavaScript Initialization Code section:
</p>
<pre class="brush: js">
function(config) {
    config.defaultGridViewOptions = {
        selectCells: true
    };
    return config;
}
</pre>
Further reading: <a href="https://docs.oracle.com/en/database/oracle/application-express/20.1/aexjs/grid.html" target="_blank">the APEX grid widget JavaScript documentation</a>.
Parse CSV-file in different Character Set with APEX_DATA_PARSER<p>
When there are special characters in a CSV file, and the file is in a different character set than you expect, parsing it can give unexpected results.<br>
From different suppliers we receive CSV files, which need to be parsed and stored in the database. This is done using <a href="https://docs.oracle.com/en/database/oracle/apex/22.2/aeapi/APEX_DATA_PARSER.html#GUID-07E9397C-DF26-40F7-AC73-F46961E1088A" target="_blank" rel="nofollow">APEX_DATA_PARSER</a>.<br>
APEX_DATA_PARSER is very flexible and supports several file formats: JSON, XML, CSV and even native Excel XLSX.<br>
The files are uploaded to the database (in the background) and processed. And this works like a charm... until one of the suppliers provided a CSV file in a different character set, with special characters.
</p>
<p>
Below is an example of this file:
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLOfgsNyB5deBEXIQlhtBfqOuHWs63juQk_myRPGZQIj2Sj1S-rxQqteAZQ9TtpjJwSEjQ2Tia9WafpM3r27s6FXPZ7ADSJQEzWnGJkXCthXgnWOewZoVgMODAiNoCs_0JptVHgxhdrJXjKLHUUnjCZbTT-PJz9UBWvQfy96zSqUUL9of6YV4FXrsnZQ/s852/charset.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="320" data-original-height="852" data-original-width="658" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLOfgsNyB5deBEXIQlhtBfqOuHWs63juQk_myRPGZQIj2Sj1S-rxQqteAZQ9TtpjJwSEjQ2Tia9WafpM3r27s6FXPZ7ADSJQEzWnGJkXCthXgnWOewZoVgMODAiNoCs_0JptVHgxhdrJXjKLHUUnjCZbTT-PJz9UBWvQfy96zSqUUL9of6YV4FXrsnZQ/s320/charset.png"/></a></div>
<p>
This file is uploaded to a database table with the following structure:
<pre class="brush:sql">
Name Null? Type
------------ ----- -------------
MIME_TYPE VARCHAR2(255)
FILENAME VARCHAR2(400)
BLOB_CONTENT BLOB
</pre>
The following query is used to parse the file and extract the data:
<pre class="brush:sql">
SQL> select p.col001
  2        ,p.col002
  3        ,p.col003
  4    from all_csvs a
  5   cross
  6   apply apex_data_parser.parse (p_content   => a.blob_content
  7                                ,p_file_name => a.filename
  8                                ) p
  9* /
COL001 COL002 COL003
__________________________ __________ __________
R�sselsheim am Main 3.000 624.760
Gnutz 1.000 139.490
�ach-Palenberg 1.000 139.490
Syke 1.000 242.600
H�kirchen-Siegertsbrunn 1.000 136.980
Eberhardzell 1.000 233.810
heinsberg 1.000 67.840
Hohndorf 3.000 304.800
Nidderau 3.000 385.890
N�n-Hardenberg 1.000 239.680
Kl��1.000 185.000
K 2.000 237.880
W�rselen 1.000 196.500
Gl�ckstadt 1.000 136.980
Dessau-Ro�au 1.000 5.900
15 rows selected.
</pre>
As you can see in the results above, the special characters have been mangled. When a word in the first column ends in a special character, the parsing of the rest of the line is also messed up: there is a NULL in Column 3 where there should be a value.<br>
Of course this has everything to do with the character set of the uploaded file.
</p>
<p>
With the following command in a terminal window on macOS, it is possible to get information about the file, including its character set:
<pre>
file -I <i>name of the file</i>
</pre>
In my case that would yield the following results:
<pre>
file -I charset.csv
charset.csv: text/plain; charset=iso-8859-1
</pre>
This means that the file is in character set ISO-8859-1.
</p>
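<p>
What happens under the hood is ordinary mojibake: ISO-8859-1 bytes for accented characters are not valid UTF-8 sequences. As an illustration (outside the database), the effect can be reproduced in JavaScript with the standard TextDecoder; the byte values below are an assumption of what the file contains:
</p>

```javascript
// "Rüssel" encoded as ISO-8859-1: ü is the single byte 0xFC,
// which is not a valid UTF-8 byte sequence.
const bytes = new Uint8Array([0x52, 0xFC, 0x73, 0x73, 0x65, 0x6C]);

const wrong = new TextDecoder('utf-8').decode(bytes);      // mis-decode
const right = new TextDecoder('iso-8859-1').decode(bytes); // correct charset

console.log(wrong); // "R�ssel", the same mangling as in the query output
console.log(right); // "Rüssel"
```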
<p>
To find out which characterset this maps to in the database, the following query can help:
<pre class="brush: sql">
SQL> select value
  2    from v$nls_valid_values
  3   where parameter = 'CHARACTERSET'
  4     and value like '%8859%'
  5* /
VALUE
_________________
WE8ISO8859P1
EE8ISO8859P2
SE8ISO8859P3
NEE8ISO8859P4
CL8ISO8859P5
AR8ISO8859P6
EL8ISO8859P7
IW8ISO8859P8
WE8ISO8859P9
NE8ISO8859P10
WE8ISO8859P15
BLT8ISO8859P13
CEL8ISO8859P14
AZ8ISO8859P9E
14 rows selected.
</pre>
From this list, WE8ISO8859P1 is selected and passed into the APEX_DATA_PARSER function:
</p>
<pre class="brush:sql">
SQL> select p.col001
  2        ,p.col002
  3        ,p.col003
  4    from all_csvs a
  5   cross
  6   apply apex_data_parser.parse (p_content      => a.blob_content
  7                                ,p_file_name    => a.filename
  8                                ,p_file_charset => 'WE8ISO8859P1'
  9                                ) p
 10* /
COL001 COL002 COL003
_____________________________ _________ __________
Rüsselsheim am Main 3.000 624.760
Gnutz 1.000 139.490
Übach-Palenberg 1.000 139.490
Syke 1.000 242.600
Höhenkirchen-Siegertsbrunn 1.000 136.980
Eberhardzell 1.000 233.810
heinsberg 1.000 67.840
Hohndorf 3.000 304.800
Nidderau 3.000 385.890
Nörten-Hardenberg 1.000 239.680
Klüß 1.000 185.000
Köln 2.000 237.880
Würselen 1.000 196.500
Glückstadt 1.000 136.980
Dessau-Roßlau 1.000 5.900
15 rows selected.
</pre>
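<p>
When this upload-and-parse flow is automated, it can help to translate the charset label that file -I reports into the corresponding Oracle character set name before calling APEX_DATA_PARSER. A minimal sketch in JavaScript; the mapping table is hand-maintained and deliberately incomplete, the full list lives in V$NLS_VALID_VALUES:
</p>

```javascript
// Illustrative, hand-maintained mapping from IANA charset labels
// (as reported by `file -I`) to Oracle character set names.
const ORACLE_CHARSET = {
  'utf-8':        'AL32UTF8',
  'iso-8859-1':   'WE8ISO8859P1',
  'iso-8859-15':  'WE8ISO8859P15',
  'windows-1252': 'WE8MSWIN1252',
  'us-ascii':     'US7ASCII'
};

function toOracleCharset(ianaLabel) {
  const name = ORACLE_CHARSET[ianaLabel.toLowerCase()];
  if (!name) throw new Error('No mapping for charset: ' + ianaLabel);
  return name;
}

console.log(toOracleCharset('iso-8859-1')); // "WE8ISO8859P1"
```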
<p>
And now all is well.
</p>
APEX Calendar showing incorrect Week Number<p>
The users of our application wanted to see the week number in the Calendar as well.<br>
This can easily be done with a little bit of JavaScript. In the "Advanced" section of the Region Attributes, there is room to add "JavaScript Initialization Code".<br>
The "Help" for this region even shows how to add week numbers to the Calendar:
</p>
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9bL3ETR8CQk4xSKHxIDgWBZhLwO9hGNiJRtk09sb9EUY5I4bcviQa48PPUxwlpb796uNPIxKvfmlQBIwtvZZsL-7lrf98PkFcaIssgeXsASyjWLFL1J5Y2ba5h32aLx5oxbRVpQ4WCrl2vICcT7p1uBEjeUBcxqJrAG49iqynZ3qblzP9UElZMHMSmg/s1646/Page_Designer.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="320" data-original-height="754" data-original-width="1646" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg9bL3ETR8CQk4xSKHxIDgWBZhLwO9hGNiJRtk09sb9EUY5I4bcviQa48PPUxwlpb796uNPIxKvfmlQBIwtvZZsL-7lrf98PkFcaIssgeXsASyjWLFL1J5Y2ba5h32aLx5oxbRVpQ4WCrl2vICcT7p1uBEjeUBcxqJrAG49iqynZ3qblzP9UElZMHMSmg/s320/Page_Designer.png"/></a></div>
Copy-and-Paste the JavaScript from the Help to the JavaScript Initialization Code and you're done:
<pre class="brush: js">
function( pOptions ) {
    pOptions.weekNumbers = true;
    pOptions.weekNumberTitle = "CW ";
    return pOptions;
}
</pre>
However, in our case it didn't show the correct week number; week number 41 and December 12 don't line up:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-uN7j8RMb4BC-5ipP-C4yxfSx7Ll6upfDotTAWXvnK6KVZY9ogpf9rgXir9-J0-PV9Upb8-zxX18ITtXCj1qnF3fExvXI6PKjQ5spVOl3iiX12P5hxBWni9_LkTKI9Livm8fod9xTjUkNzJcShEGOYQUqs62DhmJgnupcY1PsI7wYJVTb64NpTI4uEw/s1196/WrongCalendarWeekNr.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="394" data-original-width="1196" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-uN7j8RMb4BC-5ipP-C4yxfSx7Ll6upfDotTAWXvnK6KVZY9ogpf9rgXir9-J0-PV9Upb8-zxX18ITtXCj1qnF3fExvXI6PKjQ5spVOl3iiX12P5hxBWni9_LkTKI9Livm8fod9xTjUkNzJcShEGOYQUqs62DhmJgnupcY1PsI7wYJVTb64NpTI4uEw/s400/WrongCalendarWeekNr.png"/></a></div>
<p>
Add the following code to the JavaScript Initialization Code in the Advanced section of the Attributes for your Calendar region, and the week number is shown correctly:
</p>
<pre class="brush: js">
function( pOptions ) {
    pOptions.weekNumbers = true;
    pOptions.weekNumberTitle = "CW ";
    pOptions.weekNumberCalculation = "ISO";
    return pOptions;
}
</pre>
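<p>
The "ISO" setting tells the underlying calendar library to use ISO-8601 week numbering: weeks start on Monday, and week 1 is the week containing the first Thursday of the year. For reference, that calculation can be sketched as a standalone JavaScript function (not APEX code):
</p>

```javascript
// ISO-8601 week number: weeks start on Monday, week 1 contains
// the first Thursday of the year.
function isoWeek(date) {
  const d = new Date(Date.UTC(date.getFullYear(), date.getMonth(), date.getDate()));
  const day = d.getUTCDay() || 7;           // Mon = 1 .. Sun = 7
  d.setUTCDate(d.getUTCDate() + 4 - day);   // shift to this week's Thursday
  const yearStart = Date.UTC(d.getUTCFullYear(), 0, 1);
  return Math.ceil(((d - yearStart) / 86400000 + 1) / 7);
}

console.log(isoWeek(new Date(2022, 11, 12))); // 50, not 41
```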
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvN-75P006U0iqqXaadJrp8_oxvv_diQ0zF8pE4tnH4QAfqu10wF42ovm4MLqlyQoVnWrdMZXU_JO86xCYQ7ba-__WjRK1Rc25c4MN3ffgfDsm0V3MuYwt-a-otxeCS18ZuM98avbv1we6wcvR_q1E56Zpyat3wWJZh9hQuONoFljDUGYL51GaKEA17A/s1196/CorrectCalendarWeekNr.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="394" data-original-width="1196" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvN-75P006U0iqqXaadJrp8_oxvv_diQ0zF8pE4tnH4QAfqu10wF42ovm4MLqlyQoVnWrdMZXU_JO86xCYQ7ba-__WjRK1Rc25c4MN3ffgfDsm0V3MuYwt-a-otxeCS18ZuM98avbv1we6wcvR_q1E56Zpyat3wWJZh9hQuONoFljDUGYL51GaKEA17A/s400/CorrectCalendarWeekNr.png"/></a></div>
Intervals go ISO-8601 #JoelKallmanDay<p>
Another year, and it's time to write an entry for the #JoelKallmanDay.
</p>
<p>
My recent discovery in Oracle Database 21c, the innovation release, is that intervals can also be expressed in ISO-8601 format.<br>
<a href="https://en.wikipedia.org/wiki/ISO_8601" target="_blank">The ISO-8601 format</a> is an international standard for describing date and time formats in data exchanges.<br>
There are two flavors of intervals in the Oracle Database, <a href="https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/TO_DSINTERVAL.html#GUID-DEBB41BD-9438-4558-A53E-428CE93C05D3" target="_blank">"Day to Second"</a> and <a href="https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/TO_YMINTERVAL.html#GUID-5DEBA096-7AC3-4B18-A4BE-D36FC9BDB450" target="_blank">"Year to Month"</a>, and there are specialized functions to convert strings in a certain format to an interval.
The following example creates a "Day to Second" interval of 4 days, 3 hours, 25 minutes and 42 seconds:
<pre class="brush: sql">
SQL> select to_dsinterval ('4 3:25:42')
2 from dual
3* /
TO_DSINTERVAL('43:2
-------------------
+04 03:25:42.000000
</pre>
Feeding this "Day to Second" interval into a <a href="https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/json_scalar.html" target="_blank">JSON_SCALAR</a> function will return the ISO-8601 standard for this interval:
<pre class="brush: sql">
SQL> select json_scalar (to_dsinterval ('4 3:25:42'))
2 from dual
3* /
JSON_SCALAR(TO_DSINTERVAL('43:25:42'))
-------------------------------------------------------------------------
"P4DT3H25M42S"
</pre>
The reverse is also true: feed an ISO-8601 interval string into the TO_DSINTERVAL function and an Oracle interval is returned:
<pre class="brush: sql">
SQL> select to_dsinterval ('P7DT4H12M53S')
2 from dual
3* /
TO_DSINTERVAL('P7DT
-------------------
+07 04:12:53.000000
</pre>
This works the same for "Year to Month" intervals.
</p>
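<p>
For completeness: the day-to-second duration format is easy to take apart outside the database as well. The sketch below parses strings such as "P4DT3H25M42S" with a regular expression (illustrative only; it ignores the year/month designators, mirroring Oracle's split between day-to-second and year-to-month intervals):
</p>

```javascript
// Parse a day-to-second ISO-8601 duration, e.g. "P4DT3H25M42S",
// into its numeric components.
function parseDsDuration(str) {
  const m = str.match(/^P(?:(\d+)D)?(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+(?:\.\d+)?)S)?)?$/);
  if (!m) throw new Error('Not a day-to-second ISO-8601 duration: ' + str);
  return {
    days:    Number(m[1] || 0),
    hours:   Number(m[2] || 0),
    minutes: Number(m[3] || 0),
    seconds: Number(m[4] || 0)
  };
}

console.log(parseDsDuration('P4DT3H25M42S')); // { days: 4, hours: 3, minutes: 25, seconds: 42 }
```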
Interactive Grid: Filter on Column that shows an Icon<p>
For one of our Interactive Grids we wanted to show the status of a column as an icon and still be able to filter on that column. Out of the box this is not possible.<br>
When choosing "HTML Expression" it is possible to show an icon in the column values, but filtering is no longer possible.<br>
When choosing "Text" as the column type it is possible to filter on the column, but you can't specify an HTML Expression to format the column values.<br>
Let's examine this in a little more detail and also show a solution, provided by <a href="https://twitter.com/WhenOthers" target="_blank">Andreas Wismann</a>.
</p>
<p>
Let's start with the query that I will use:
<pre class="brush: sql">
select p.id as pid
      ,p.assignee
      ,p.name
      ,p.description
      ,p.start_date
      ,p.end_date
      ,case p.is_complete_yn
          when 'Y' then 'Yes'
          when 'N' then 'No'
       end as is_complete
      ,case p.is_complete_yn
          when 'Y' then 'fa-check-circle u-success-text'
          when 'N' then 'fa-times-circle u-danger-text'
       end as is_complete_icon
  from eba_demo_proj_tasks p
</pre>
The last two columns (is_complete and is_complete_icon) are the ones that matter in this example. IS_COMPLETE will provide a proper English word instead of a single letter representing the status. IS_COMPLETE_ICON will translate the single letter to the appropriate icon to be shown in the column values. This column (IS_COMPLETE_ICON) will be hidden because we will use it only to format the column values.
</p>
<p>
First the plain vanilla version, no icon yet. The screenshots below show the text of the status, as well as the filter that it will produce.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-6eL9i0J4HjdyDZ5xA7hSTlaSkH1fa6yq2YKS1DpRi2VmshoRPpK1jeWju1TimP9YsaXG79sJEkdeniZmQdOY2KgNBcw39l4YORFacMFFgXaIWRHGmEZVGEqj9aSYQ8naYD52eoxTEtr5f9o3S_qPh9tMmSphijFZnicJfLCruAL3ioq78OGOZmzj8w/s1600/Interactive_Grid_Tasks.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" data-original-height="754" data-original-width="708" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi-6eL9i0J4HjdyDZ5xA7hSTlaSkH1fa6yq2YKS1DpRi2VmshoRPpK1jeWju1TimP9YsaXG79sJEkdeniZmQdOY2KgNBcw39l4YORFacMFFgXaIWRHGmEZVGEqj9aSYQ8naYD52eoxTEtr5f9o3S_qPh9tMmSphijFZnicJfLCruAL3ioq78OGOZmzj8w/s1600/Interactive_Grid_Tasks.png"/></a></div><div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjj530Up1DSvCGvH_7qvbQOEbjK-oMM-56acNaQjOzRtkurJa1B0A2auDzBKjSPkkwX6jP8YK6YC5rwxdc1iTQIT8nC3ahiQgvswWjKE6VO4rhdrYwf5-MrG0Kkh_Y4WlXPUWvdzWuLYyWC_uQt6Ah4oOQVIlC_DPZWQKap1uQ02AqJVzi4yptDzqwiuw/s1600/Interactive_Grid_Tasks-2.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" data-original-height="754" data-original-width="708" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjj530Up1DSvCGvH_7qvbQOEbjK-oMM-56acNaQjOzRtkurJa1B0A2auDzBKjSPkkwX6jP8YK6YC5rwxdc1iTQIT8nC3ahiQgvswWjKE6VO4rhdrYwf5-MrG0Kkh_Y4WlXPUWvdzWuLYyWC_uQt6Ah4oOQVIlC_DPZWQKap1uQ02AqJVzi4yptDzqwiuw/s1600/Interactive_Grid_Tasks-2.png"/></a></div>
</p>
<p>
Next we will change the column from "Text" to "HTML Expression", which allows you to change the column value to whatever HTML you can come up with. As I want to show an icon, this is the HTML that I used:
<pre class="brush: js">
&lt;span aria-hidden="true" class="fa &amp;IS_COMPLETE_ICON."&gt;&lt;/span&gt;
</pre>
This results in a properly formatted column, the coveted icon that we were looking for, but it is no longer possible to filter on that column.
Below are screenshots of the effect this has.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTA2y2I9o9WkaRTfc_ZTdUaYWqe7jTkuPT541euRT0BJK-b_RexzrnBfiaQ1WIIWCVLnSPHwrCANWIMQSiR_oY5DbYDFIrfszo44xehSISXUYSQddkuz-UY2iDpHhwt1uE3FzJJf8bSE_kvVS3U2_M3RuJ8ZxnEFymV7joNNxT5UsBK8xNuBT3F_0haw/s1600/Interactive_Grid_Tasks-3.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" data-original-height="754" data-original-width="708" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTA2y2I9o9WkaRTfc_ZTdUaYWqe7jTkuPT541euRT0BJK-b_RexzrnBfiaQ1WIIWCVLnSPHwrCANWIMQSiR_oY5DbYDFIrfszo44xehSISXUYSQddkuz-UY2iDpHhwt1uE3FzJJf8bSE_kvVS3U2_M3RuJ8ZxnEFymV7joNNxT5UsBK8xNuBT3F_0haw/s1600/Interactive_Grid_Tasks-3.png"/></a></div><div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkGjF379eSdea_LU9KkckdLAQeuylln2IrFWV6kUHDDZs4YGXHXevn47lzic8YqOOF_dOZZljz6ww-XONA25cz6aPPOxrh-tZGJlWcGHU0ezNhOxw5pKDdFYuKzRPzfr7GqB-2CJbCIiOvU0qFggX4p1UqV-hujbF5iEDZE46bd1TgEvoBcE91REDyiA/s1600/Interactive_Grid_Tasks-4.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" data-original-height="754" data-original-width="708" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjkGjF379eSdea_LU9KkckdLAQeuylln2IrFWV6kUHDDZs4YGXHXevn47lzic8YqOOF_dOZZljz6ww-XONA25cz6aPPOxrh-tZGJlWcGHU0ezNhOxw5pKDdFYuKzRPzfr7GqB-2CJbCIiOvU0qFggX4p1UqV-hujbF5iEDZE46bd1TgEvoBcE91REDyiA/s1600/Interactive_Grid_Tasks-4.png"/></a></div>
</p>
<p>
After struggling with this problem for a while, I reached out to <a href="https://twitter.com/i/communities/1497376689797537796" target="_blank">the Twitter APEX Community</a>, and I was very happy that <a href="https://twitter.com/WhenOthers" target="_blank">Andreas Wismann</a> replied, because his solution works like a charm.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjMe4eGCe7v_I6Re8lQ9wW9Dz8WkN0igBf0txFrZjwBXT1Qu992UkHVJKLjS1g-BVm5Xap_P7W2UItMg8nLZQ-dXMLtZjh_0M88sd9CrnHKGXGyrwS4Mz_X5cB6HIF4JnHxlz9hA5e-JW0HIVfHhyXGG14JptRuK1OMCoBL2u5y3p1J12JpduyHkfl3A/s2040/Alex_Nuijten_on_Twitter___Can_you_filter_on_a_column_in_an_Interactive_Grid_which_is_an_HTML_Expression__Or____Can_you_specify_an_HTML_Expression_for_a_Text_Column__Which_has_the_same_effect__I_think___orclapex____Twitter.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" height="320" data-original-height="2040" data-original-width="1190" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjMe4eGCe7v_I6Re8lQ9wW9Dz8WkN0igBf0txFrZjwBXT1Qu992UkHVJKLjS1g-BVm5Xap_P7W2UItMg8nLZQ-dXMLtZjh_0M88sd9CrnHKGXGyrwS4Mz_X5cB6HIF4JnHxlz9hA5e-JW0HIVfHhyXGG14JptRuK1OMCoBL2u5y3p1J12JpduyHkfl3A/s320/Alex_Nuijten_on_Twitter___Can_you_filter_on_a_column_in_an_Interactive_Grid_which_is_an_HTML_Expression__Or____Can_you_specify_an_HTML_Expression_for_a_Text_Column__Which_has_the_same_effect__I_think___orclapex____Twitter.png"/></a></div>
As the code doesn't show properly in the screenshot from Twitter, and also for easy copy-paste, this is the code that you put in the JavaScript Initialization Code of the column where you want to show the icon:
<pre class="brush: js">
function( config ) {
    config.defaultGridColumnOptions = {
        cellTemplate: '&lt;span aria-hidden="true" class="fa &amp;IS_COMPLETE_ICON."&gt;&lt;/span&gt;'
    };
    return config;
}
</pre>
And the final result in pictures:
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhikj9tAzbJZkqQlxCfiO19jsfocL4M94M5MakbYvhCIEaRpHa-QMrbRY_4P9MncWcavRaaUom2bNGYV5C19Nx95sPR7NHomm5LcF8B7JWk9r5sGUaC4olCEnyHxwvnbJplukZoMPqUbQgA6pEDQR11xxmcmHcUPNKL9uyJ38M-usTPRP1gGiv7pGwjOg/s1600/Interactive_Grid_Tasks-5.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" data-original-height="754" data-original-width="708" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhikj9tAzbJZkqQlxCfiO19jsfocL4M94M5MakbYvhCIEaRpHa-QMrbRY_4P9MncWcavRaaUom2bNGYV5C19Nx95sPR7NHomm5LcF8B7JWk9r5sGUaC4olCEnyHxwvnbJplukZoMPqUbQgA6pEDQR11xxmcmHcUPNKL9uyJ38M-usTPRP1gGiv7pGwjOg/s1600/Interactive_Grid_Tasks-5.png"/></a></div>
</p>
ORA-24017: cannot enable enqueue on QUEUE, %s is an exception queue<p>
Generating documents can take quite some time, which I prefer to do in the background. To facilitate this we use Advanced Queuing (DBMS_AQ). We place the orders that we want a document for in a Queue. A callback function on the Queue takes care of the document generation (we use <a href="https://www.apexofficeprint.com/index.html" target="_blank">APEX Office Print</a>).<br>
If the above is abracadabra to you, I suggest you read up on DBMS_AQ and callback functions in <a href="http://www.oracle-developer.net/display.php?id=411" target="_blank">this blog by Adrian Billington</a>.
</p>
<p>
Normally this works flawlessly, until recently, when there was a typo in the URL used to call AOP.<br>
When the callback function tried to call AOP, it was unable to do so. Eventually this caused the message in the Queue to be moved to the Exception Queue.
</p>
<p>
So now we have all these messages in the Exception Queue and still no documents. The plan was to dequeue the messages from the Exception Queue, enqueue them on the normal Queue, and have them processed again, after the typo was corrected of course.<br>
By default it is not possible to dequeue messages from an Exception Queue, so step one is to enable dequeue for the Exception Queue.
</p>
<p>
First step: find out the name of the Exception Queue:
</p>
<pre class="brush: sql">
SQL> select name
2 ,queue_table
3 ,queue_type
4 from user_queues
5* /
NAME QUEUE_TABLE QUEUE_TYPE
___________________________ ________________________ __________________
AQ$_ORDER_DOCUMENTS_QUEUE_E ORDER_DOCUMENTS_QUEUE EXCEPTION_QUEUE
DOCUMENTS_QUEUE ORDER_DOCUMENTS_QUEUE NORMAL_QUEUE
</pre>
<p>
Next step, enable the Dequeue on the Exception Queue:
</p>
<pre class="brush: sql">
SQL> begin
2 dbms_aqadm.start_queue (queue_name => 'AQ$_ORDER_DOCUMENTS_QUEUE_E'
3 ,dequeue => true
4 );
5 end;
6* /
Error starting at line : 1 in command -
begin
dbms_aqadm.start_queue (queue_name => 'AQ$_ORDER_DOCUMENTS_QUEUE_E'
,dequeue => true
);
end;
Error report -
ORA-24017: cannot enable enqueue on QUEUE, AQ$_ORDER_DOCUMENTS_QUEUE_E is an exception queue
ORA-06512: at "SYS.DBMS_AQADM", line 747
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 8626
ORA-06512: at "SYS.DBMS_AQADM_SYSCALLS", line 124
ORA-06512: at "SYS.DBMS_AQADM_SYS", line 8540
ORA-06512: at "SYS.DBMS_AQADM", line 742
ORA-06512: at line 2
24017. 00000 - "cannot enable enqueue on QUEUE, %s is an exception queue"
*Cause: User tried to enable enqueueing to an exception queue.
*Action: None.
</pre>
<p>
However this doesn't work, as you can see above.<br>
Because I didn't read the error message properly, I thought it said that <strong>dequeue</strong> was not allowed on an Exception Queue, but it was actually complaining about the <strong>enqueue</strong>.<br>
The default value for the enqueue argument of DBMS_AQADM.START_QUEUE is true, and enabling enqueue on an Exception Queue is not allowed.<br>
</p>
<pre class="brush: sql">
SQL> begin
2 dbms_aqadm.start_queue (queue_name => 'AQ$_ORDER_DOCUMENTS_QUEUE_E'
3 ,enqueue => false
4 ,dequeue => true
5 );
6 end;
7* /
PL/SQL procedure successfully completed.
</pre>
<p>
Explicitly passing the enqueue argument as false in the call to DBMS_AQADM.START_QUEUE remedies this.
</p>
<p>
Now it is trivial to Dequeue from the Exception Queue and Enqueue on the Normal Queue:
</p>
<pre class="brush: sql">
declare
    l_dequeue_options    dbms_aq.dequeue_options_t;
    l_message_properties dbms_aq.message_properties_t;
    l_message_handle     raw(16);
    l_payload            document_queue_ot;
    l_enqueue_options    dbms_aq.enqueue_options_t;
begin
    for r in (select q.msgid
                from order_documents_queue q
               where q.q_name = 'AQ$_ORDER_DOCUMENTS_QUEUE_E')
    loop
        l_dequeue_options.msgid := r.msgid;
        dbms_aq.dequeue (queue_name         => 'AQ$_ORDER_DOCUMENTS_QUEUE_E'
                        ,dequeue_options    => l_dequeue_options
                        ,message_properties => l_message_properties
                        ,payload            => l_payload
                        ,msgid              => l_message_handle
                        );
        dbms_aq.enqueue (queue_name         => 'documents_queue'
                        ,enqueue_options    => l_enqueue_options
                        ,message_properties => l_message_properties
                        ,payload            => l_payload
                        ,msgid              => l_message_handle
                        );
    end loop;
end;
/
</pre>
<p>
After processing the Normal Queue, there were still a few messages that popped back into the Exception Queue. After careful examination, it was determined that they could be purged from the Exception Queue:
</p>
<pre class="brush: sql">
declare
    po_t dbms_aqadm.aq$_purge_options_t;
begin
    dbms_aqadm.purge_queue_table (queue_table     => 'ORDER_DOCUMENTS_QUEUE'
                                 ,purge_condition => 'qtview.queue = ''AQ$_ORDER_DOCUMENTS_QUEUE_E'''
                                 ,purge_options   => po_t
                                 );
end;
/
</pre>
<h2>Links</h2>
<ul>
<li>
<a href="http://www.oracle-developer.net/display.php?id=411" target="_blank">Adrian Billington's Introduction to Advanced Queueing</a>
</li>
<li>
<a href="https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/DBMS_AQADM.html" target="_blank">DBMS_AQADM</a>
</li>
<li>
<a href="https://docs.oracle.com/en/database/oracle/oracle-database/21/arpls/DBMS_AQ.html" target="_blank">DBMS_AQ</a>
</li>
</ul>
Modify a JSON structure with a single Update<p>
Recently there was a question on the Oracle Community Forum related to JSON, more specifically about how to change the structure of a JSON document. The link to the question is at the bottom of this post.<br>
The starting point is the following JSON:
<pre class="brush: js">
{
"tktKey": "123456",
"attlist1": [
{
"attkey": "123",
"attval": "abc"
},
{
"attkey": "456",
"attval": "def"
}
],
"attlist2": [
{
"attkey": "098",
"attval": "xyz"
}
]
}
</pre>
The task at hand was to combine both attlist1 and attlist2 into a single attlist.<br>
To do this, the JSON object contained in attlist2 needs to be extracted and placed in attlist1. After that, the element attlist2 can be removed from the structure. When this is done, there is an element attlist1 with three JSON objects in it. Finally, the element attlist1 needs to be renamed to attlist.<br>
All of these actions can be done with a single SQL statement using JSON_TRANSFORM.<br>
<br>
Let's first create a table to hold the original JSON:
<pre class="brush: sql">
SQL> create table js
2 (str clob
3 )
4* /
Table JS created.
</pre>
And insert the JSON Object in there.
<pre class="brush: sql">
SQL> insert into js values ('{ "tktKey": "123456", "attlist1": [ { "attkey": "123", "attval": "abc" }, { "attkey": "456","attval": "def" } ], "attlist2": [{"attkey": "098", "attval": "xyz" } ]}')
2* /
1 row inserted.
</pre>
Here comes the magic moment:
<pre class="brush: sql">
SQL> update js
2 set str= json_transform
3 (js.str
4 ,append '$.attlist1'
5 = json_query (js.str, '$.attlist2[*]')
6 ,remove '$.attlist2'
7 ,rename '$.attlist1' ='attlist')
8* /
1 row updated.
</pre>
With JSON_QUERY (line 5) I extract the JSON object from attlist2, which is appended (APPEND, line 4) to attlist1.<br>
Attlist2 is removed from the JSON object (line 6), and finally, on line 7, attlist1 is renamed to attlist.<br><br>
And to view the result in a nice, readable format:
<pre class="brush: sql">
SQL> select json_query (j.str, '$' returning clob pretty) as result
2 from js j
3* /
RESULT
_______
{
"tktKey" : "123456",
"attlist" :
[
{
"attkey" : "123",
"attval" : "abc"
},
{
"attkey" : "456",
"attval" : "def"
},
{
"attkey" : "098",
"attval" : "xyz"
}
]
}
</pre>
</p>
<p>
The more I use the JSON functionality that the Oracle database has to offer, the more I like them!
</p>
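<p>
For comparison, the same restructuring can be sketched in plain JavaScript; this also makes it easy to verify what JSON_TRANSFORM did:
</p>

```javascript
// The original document from the post.
const doc = {
  tktKey: "123456",
  attlist1: [ { attkey: "123", attval: "abc" },
              { attkey: "456", attval: "def" } ],
  attlist2: [ { attkey: "098", attval: "xyz" } ]
};

// Append attlist2's objects to attlist1, drop attlist2,
// and rename attlist1 to attlist, all in one go.
const transformed = {
  tktKey: doc.tktKey,
  attlist: [...doc.attlist1, ...doc.attlist2]
};

console.log(transformed.attlist.length); // 3
```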
<h2>Links</h2>
<ul>
<li>
<a href="https://community.oracle.com/tech/developers/discussion/4495440/how-to-combine-two-json-array-list-into-single-array-list" target="_blank">Oracle Community Question</a>
</li>
<li>
<a href="https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/JSON_QUERY.html#GUID-6D396EC4-D2AA-43D2-8F5D-08D646A4A2D9" target="_blank">JSON_QUERY</a>
</li>
<li>
<a href="https://docs.oracle.com/en/database/oracle/oracle-database/21/sqlrf/JSON_TRANSFORM.html#GUID-DD2A821B-C688-4310-81B5-5F45090B9366" target="_blank">JSON_TRANSFORM</a>
</li>
</ul>
Everybody needs a break: APEX_WEB_SERVICE #JoelKallmanDay<p>
When <a href="https://oracle-base.com/blog/2021/10/04/joel-kallman-day-announcement/" target="_blank">Tim Hall announced</a> that there would be a community blog-day in memory of Joel Kallman, I knew I wanted to participate.<br>
For this post, I'll keep it (somewhat) technical.
</p>
<p>
I'm old enough to remember what a hassle it was to call a webservice from PL/SQL, and now appreciate how easy it has become. So, this post is about APEX_WEB_SERVICE.<br>
To show how easy it is to work with APEX_WEB_SERVICE, and as everybody loves a holiday, I decided to combine the two.<br>
There is a webservice which gives you the public holidays per country, which can be found at: <a href="https://date.nager.at/" target="_blank">https://date.nager.at/</a>.
</p>
<p>
For this example I created a table which holds the date and the description of each holiday:
<pre class="brush:sql">
create table my_holidays
(holiday date
,description varchar2(500)
)
</pre>
</p>
<p>
And the procedure to populate the table for a given year is as follows:
<pre class="brush: sql">
create or replace
procedure populate_holidays (p_year in number)
is
    --
    l_url constant varchar2(500) := 'https://date.nager.at/api/v3/publicholidays';
    l_response blob;
    --
begin
    l_response := apex_web_service.make_rest_request_b (p_url         => l_url||'/'||to_char (p_year)||'/NL'
                                                       ,p_http_method => 'GET'
                                                       );
    insert into my_holidays
        (holiday
        ,description
        )
    select dt
          ,name
      from json_table (l_response, '$[*]'
               columns (dt   date          path '$.date'
                       ,name varchar2(100) path '$.name'
                       ,nested path '$.counties[*]'
                            columns (county path '$')))
    ;
end populate_holidays;
</pre>
</p>
<p>
Look at the elegance of the call to APEX_WEB_SERVICE, a very straightforward call to the URL of the Public Holiday API.<br>
An INSERT-SELECT with a JSON_TABLE completes the procedure.
</p>
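<p>
The shape of that JSON_TABLE projection is easy to check outside the database. The sketch below mimics it in JavaScript on a hand-made fragment of the Nager.Date response (the sample rows are assumptions, but the field names date, localName and name match the real payload):
</p>

```javascript
// A fragment imitating the Nager.Date /publicholidays response.
const sample = [
  { date: "2022-01-01", localName: "Nieuwjaarsdag", name: "New Year's Day" },
  { date: "2022-04-15", localName: "Goede Vrijdag", name: "Good Friday" }
];

// The same projection as the JSON_TABLE: one row per holiday,
// keeping only the date and the (English) name.
const rows = sample.map(h => ({ holiday: new Date(h.date), description: h.name }));

console.log(rows[0].description); // "New Year's Day"
```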
<p>
Calling the procedure with
<pre class="brush:sql">
begin
populate_holidays (p_year => 2022);
end;
/
</pre>
yields the following result:
<pre class="brush: sql">
select *
from my_holidays
/
01-01-22 New Year's Day
15-04-22 Good Friday
17-04-22 Easter Sunday
18-04-22 Easter Monday
27-04-22 King's Day
05-05-22 Liberation Day
26-05-22 Ascension Day
05-06-22 Pentecost
06-06-22 Whit Monday
25-12-22 Christmas Day
26-12-22 St. Stephen's Day
</pre>
I never knew that the second day of Christmas (yes, we have that in The Netherlands) is called St. Stephen's Day.
</p>
SQLcl 21.2: APEX LIST Command gives error (and a work around)<p>
Recently I upgraded to the latest SQLcl release, currently 21.2, and found a small bug.<br>
To quickly export APEX applications I often use the "apex" commands in SQLcl. Exporting an APEX application is done by issuing the following command:
<pre class="brush: sql">
apex export 1000
</pre>
where you pass in the number of the application that you want to export.<br>
This still works fine, but I don't always remember the application id, so I use "APEX list" to show a list of APEX applications. Unfortunately, this command fails in this release.<br>
To demonstrate that this is happening:
<pre class="brush: sql">
SQL> show version
Oracle SQLDeveloper Command-Line (SQLcl) version: 21.2.0.0 build: 21.2.0.169.1529
</pre>
The "APEX" and "APEX list" commands are interchangeable.
<pre class="brush: sql">
SQL> apex
2021-07-14 08:55:35.327 SEVERE oracle.dbtools.raptor.newscriptrunner.ScriptExecutor run java.base/java.util.Collections.nCopies(Collections.java:5027)
java.lang.IllegalArgumentException: List length = -3
at java.base/java.util.Collections.nCopies(Collections.java:5027)
at oracle.dbtools.raptor.utils.AnsiColorListPrinter.printHeaders(AnsiColorListPrinter.java:90)
at oracle.dbtools.raptor.utils.ListPrinter.print(ListPrinter.java:96)
at oracle.dbtools.raptor.utils.ListPrinter.print(ListPrinter.java:68)
at oracle.dbtools.raptor.utils.ListPrinter.print(ListPrinter.java:62)
at oracle.dbtools.raptor.utils.ListPrinter.printListofList(ListPrinter.java:208)
at oracle.dbtools.raptor.newscriptrunner.commands.ApexCmd.printList(ApexCmd.java:250)
at oracle.dbtools.raptor.newscriptrunner.commands.ApexCmd.handleEvent(ApexCmd.java:173)
at oracle.dbtools.raptor.newscriptrunner.CommandRegistry.fireListeners(CommandRegistry.java:346)
at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.run(ScriptRunner.java:226)
at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:344)
at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:227)
at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.processLine(SqlCli.java:341)
at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.runSqlcl(SqlCli.java:1049)
at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.main(SqlCli.java:369)
</pre>
As you can see, using the "APEX" command in SQLcl raises a Java exception.<br>
And if you run into an exception, what do you do? Of course you run to tweet about it... knowing that <a href="https://twitter.com/thatjeffsmith" target="_blank">Jeff Smith, Product Manager at Oracle</a> is always listening (or at least it seems that way).
He replied with a solution to my problem.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDr3V30OfpoMqwLYy8URqR9pwCDsDzEAn8wTyKFf9ThmO4nptxQP_nt30YZgPNmyxOffRooPo9SPuGi1ZYw248oRImCkqCaDCK_4G_GLY3-lk0e7Ply3m9whqy7V0zHPmvLXiDCJRDEMW6/s1160/Jeff_Smith_%25F0%259F%258D%25BB_on_Twitter____alexnuijten__oraclesqlcl_Don_t_thank_me__thank_the_developer__totierne____he_s_shy__but_awesome____Twitter.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="738" data-original-width="1160" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjDr3V30OfpoMqwLYy8URqR9pwCDsDzEAn8wTyKFf9ThmO4nptxQP_nt30YZgPNmyxOffRooPo9SPuGi1ZYw248oRImCkqCaDCK_4G_GLY3-lk0e7Ply3m9whqy7V0zHPmvLXiDCJRDEMW6/s320/Jeff_Smith_%25F0%259F%258D%25BB_on_Twitter____alexnuijten__oraclesqlcl_Don_t_thank_me__thank_the_developer__totierne____he_s_shy__but_awesome____Twitter.png"/></a></div>
And using the ALTER SESSION command to set the NLS_DATE_FORMAT, things work again.
<pre class="brush: sql">
SQL> alter session set nls_date_format = 'DD-MON-RR HH:MI:SSpm'
2* /
Session altered.
</pre>
<pre class="brush: sql">
SQL> apex
WORKSPACE_ID WORKSPACE APPLICATION_ID APPLICATION_NAME BUILD_STATUS LAST_UPDATED_ON
___________________ ____________ _________________ __________________________________________ __________________ _______________________
1500550789501687 ALEX 1000 Master Application Run and Develop 18-JUN-21 11:42:49am
</pre>
And no, it doesn't have to be that particular format mask for NLS_DATE_FORMAT. You can also use a different format mask, like 'dd-mm-yyyy hh24:mi'; it only changes how the last column is formatted in your output.
</p>
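<p>
If you don't want to type the ALTER SESSION every time, you could put it in a login.sql script that SQLcl runs on startup (assuming your SQLPATH environment variable points to the directory that holds it):
</p>
<pre class="brush: sql">
-- login.sql
alter session set nls_date_format = 'DD-MON-RR HH:MI:SSpm';
</pre>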
Thanks to <a href="https://twitter.com/totierne" target="_blank">Turloch O'Tierney</a> for coming up with this solution.
<div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvYl5cdCBQ86YUr8eGuod7fxwlFTfZilIvJXtA072BR8jnb2TEpk04sopkn86zzuEc_sIiwmBmKvC_yNAGhgaIWXoezsOTv0hYOiLZoqo8oTyRIsfLvf64Qus4MR15VoBsduzVyE7ISY0m/s1174/Jeff_Smith_%25F0%259F%258D%25BB_on_Twitter____alexnuijten__oraclesqlcl_Don_t_thank_me__thank_the_developer__totierne____he_s_shy__but_awesome____Twitter-2.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="" border="0" width="400" data-original-height="676" data-original-width="1174" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhvYl5cdCBQ86YUr8eGuod7fxwlFTfZilIvJXtA072BR8jnb2TEpk04sopkn86zzuEc_sIiwmBmKvC_yNAGhgaIWXoezsOTv0hYOiLZoqo8oTyRIsfLvf64Qus4MR15VoBsduzVyE7ISY0m/s400/Jeff_Smith_%25F0%259F%258D%25BB_on_Twitter____alexnuijten__oraclesqlcl_Don_t_thank_me__thank_the_developer__totierne____he_s_shy__but_awesome____Twitter-2.png"/></a></div>Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com0Oosterhout, Netherlands51.6410202 4.861690123.330786363821154 -30.2945599 79.951254036178852 40.0179401tag:blogger.com,1999:blog-1271763227002553835.post-31420539138921889562020-11-12T15:23:00.002+01:002020-11-12T15:23:21.430+01:00Automatically Not Null when adding a DEFAULT Clause<p>
Today I learned something new about the DEFAULT ON NULL clause.<br>
Since Oracle Database release 12 it is possible to define a DEFAULT ON NULL clause, which supplies the default when a NULL is provided as the value. The thing I didn't know is that the column is automatically defined as NOT NULL.<br>
Is this documented behaviour? Yes, it is. Quote from <a href="https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/CREATE-TABLE.html#GUID-F9CE0CC3-13AE-4744-A43C-EAC7A71AAAB6" target="_blank">the documentation</a>:
<blockquote>
When you specify ON NULL, the NOT NULL constraint and NOT DEFERRABLE constraint state are implicitly specified.
</blockquote>
</p>
<pre class="brush: sql">
SQL> select banner
2 from v$version
3 /
BANNER
--------------------------------------------------------------------------------
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
</pre>
<pre class="brush: sql">
SQL> create table t
2 (col1 varchar2(10) default on null 'hello')
3 /
Table T created.
</pre>
<pre class="brush: sql">
SQL> desc t
Name Null? Type
---- -------- ------------
COL1 NOT NULL VARCHAR2(10)
</pre>
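<p>
The effect is easy to demonstrate: explicitly inserting a NULL makes the default kick in.
</p>
<pre class="brush: sql">
SQL> insert into t (col1) values (null);
1 row inserted.
SQL> select col1 from t;
COL1
----------
hello
</pre>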
<p>
And in case you are wondering if you can change the column afterwards to NULL, the answer is "no".
<pre class="brush: sql">
SQL> alter table t
2 modify (col1 null)
3 /
Error starting at line : 1 in command -
alter table t
modify (col1 null)
Error report -
ORA-01451: column to be modified to NULL cannot be modified to NULL
01451. 00000 - "column to be modified to NULL cannot be modified to NULL"
*Cause: the column may already allow NULL values, the NOT NULL constraint
is part of a primary key or check constraint.
*Action: if a primary key or check constraint is enforcing the NOT NULL
constraint, then drop that constraint.
</pre>
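The implicitly created NOT NULL constraint also shows up in the data dictionary as a check constraint; a query like this reveals it (the constraint name is system-generated and will differ per database):
<pre class="brush: sql">
select constraint_name
      ,constraint_type
      ,search_condition
  from user_constraints
 where table_name = 'T'
/
</pre>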
</p>Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com0Oosterhout, Netherlands51.6410202 4.861690123.330786363821154 -30.2945599 79.951254036178852 40.0179401tag:blogger.com,1999:blog-1271763227002553835.post-10770450303073070682020-10-22T09:00:00.007+02:002020-10-22T14:09:53.194+02:00Qualified Expressions Quite Puzzling - Part Three: Bug!<p>
After some back and forth with <a href="https://twitter.com/patch72" target="_blank">Patrick Barel</a> and <a href="https://twitter.com/sfonplsql" target="_blank">Steven Feuerstein</a>, it became clear that it most likely is a bug with Qualified Expressions.<br>
A simple case to reproduce the bug is below
</p>
<pre class="brush: sql">
SQL> set serveroutput on
SQL> declare
2 type tbl_t is table of pls_integer
3 index by pls_integer;
4 l_tbl tbl_t;
5 l_idx pls_integer;
6 begin
7 for i in 1..3
8 loop
9 l_tbl := tbl_t (i => i);
10 end loop;
11 --
12 l_idx := l_tbl.first;
13 while l_idx is not null
14 loop
15 sys.dbms_output.put_line (to_char (l_idx)
16 ||' - '||
17 to_char (l_tbl(l_idx))
18 );
19 l_idx := l_tbl.next (l_idx);
20 end loop;
21 end;
22 /
1 - 1
2 - 2
3 - 3
PL/SQL procedure successfully completed.
</pre>
Why is the above a bug? If you do a similar assignment, but without a loop, the outcome is different (and most likely correct):
<pre class="brush: sql">
SQL> declare
2 type tbl_t is table of pls_integer
3 index by pls_integer;
4 l_tbl tbl_t;
5 l_idx pls_integer;
6 begin
7 l_tbl := tbl_t (1 => 1);
8 l_tbl := tbl_t (2 => 2);
9 l_tbl := tbl_t (3 => 3);
10 --
11 l_idx := l_tbl.first;
12 while l_idx is not null
13 loop
14 sys.dbms_output.put_line (to_char (l_idx)
15 ||' - '||
16 to_char (l_tbl(l_idx))
17 );
18 l_idx := l_tbl.next (l_idx);
19 end loop;
20 end;
21 /
3 - 3
PL/SQL procedure successfully completed.
</pre>
<p>
Bug 32057533 - ASSIGNMENT TO COLLECTION USING QUALIFIED EXPRESSION APPENDS INSTEAD OF REPLACING.
</p>
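<p>
If appending is what you actually want, a plain indexed assignment inside the loop is the unambiguous way to get the same result:
</p>
<pre class="brush: sql">
for i in 1..3
loop
   l_tbl (i) := i;
end loop;
</pre>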
Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com0Oosterhout, Netherlands51.6410202 4.861690123.330786363821154 -30.2945599 79.951254036178852 40.0179401tag:blogger.com,1999:blog-1271763227002553835.post-77241446204547464002020-10-21T16:26:00.040+02:002020-10-22T08:51:05.065+02:00Qualified Expressions Quite Puzzling - Part Two<p>
After I published my blog <a href="https://nuijten.blogspot.com/2020/10/qualified-expressions-quite-puzzling.html" target="_blank">about Qualified Expressions</a>, my dear friend <a href="https://twitter.com/patch72" target="_blank">Patrick Barel</a> commented that my remarks about the Cursor For Loop were not at all what he expected to happen.
</p>
<p>
This is the code block that Patrick referred to.
<pre class="brush: sql">
10 for r in (select * from t)
11 loop
12 l_tbl := tbl_t (l_tbl.count + 1 => rec_t (num1 => r.id
13 ,num2 => r.id));
14 end loop;
</pre>
with these results
<pre class="brush: sql">
1 - 1
2 - 2
3 - 3
</pre>
Looking at this code example differently, you might expect the outcome to be only one record: the last one.<br>
Each iteration would replace the complete Associative Array with a new instance of it.<br><br>
So there was another puzzling thing in the original blogpost that I hadn't noticed before, probably because I wasn't looking for it. Why isn't the whole Associative Array replaced? This code seems to append values to the Associative Array, which happened to be my objective. Strange indeed.<br>
Anyhoo, probably the better way to append values to the Associative Array is to put the index on the other side of the assignment, and use the Qualified Expression only for the Record:
<pre class="brush: sql">
10 for r in (select * from t)
11 loop
12 l_tbl (l_tbl.count + 1) := rec_t (num1 => r.id
13 ,num2 => r.id);
14 end loop;
</pre>
</p>
<p>
When the pointless condition is added inside the For Loop, all values from the table are now inserted into the Associative Array:
<pre class="brush: sql">
10 for r in (select * from t)
11 loop
12 if r.yes_no = 'Y'
13 then
14 l_tbl (l_tbl.count + 1) := rec_t (num1 => r.id
15 ,num2 => r.id);
16 else
17 l_tbl (l_tbl.count + 1) := rec_t (num1 => r.id
18 ,num2 => r.id);
19 end if;
20 end loop;
</pre>
and all records from the table end up in the Associative Array:
<pre class="brush: sql">
1 - 1
2 - 2
3 - 3
</pre>
</p>
Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com0Oosterhout, Netherlands51.6410202 4.861690123.330786363821154 -30.2945599 79.951254036178852 40.0179401tag:blogger.com,1999:blog-1271763227002553835.post-71233863069818356782020-10-20T16:13:00.002+02:002020-10-22T08:33:25.787+02:00Qualified Expressions Quite Puzzling<p>
Qualified expressions for PL/SQL were introduced in version 18 of the Oracle database. They provide a clearer way of working with collections. At the bottom of this post is the link to the official documentation.<br>
However, recently I encountered something that left me puzzled, and I still can't explain why. This blogpost is just about my observations, and I welcome any comments.
</p>
<p>
First a short demo of how you can work with Qualified Expressions.
<pre class="brush: sql">
SQL> select banner
2 from v$version;
BANNER
___________________________________________________________________
Oracle Database 18c EE High Perf Release 18.0.0.0.0 - Production
</pre>
<pre class="brush: sql">
SQL> set serveroutput on
SQL> declare
2 type rec_t is record
3 (num1 number
4 ,num2 number);
5 type tbl_t is table of rec_t
6 index by pls_integer;
7 l_tbl tbl_t;
8 l_idx pls_integer;
9 begin
10 for i in 1..3
11 loop
12 l_tbl := tbl_t (i => rec_t (num1 => i
13 ,num2 => i));
14 end loop;
15 --
16 l_idx := l_tbl.first;
17 while l_idx is not null
18 loop
19 sys.dbms_output.put_line (to_char (l_tbl(l_idx).num1)
20 ||' - '||
21 to_char (l_tbl(l_idx).num2)
22 );
23 l_idx := l_tbl.next (l_idx);
24 end loop;
25 end;
26 /
</pre>
On lines 2-4 a Record Type is declared with two numeric attributes. Lines 5 and 6 define an Associative Array of the Record Type.<br>
The Qualified Expression is on lines 12 and 13.<br>
In the loop a new Associative Array, indexed by the loop variable and holding a Record, is assigned using the Qualified Expression syntax.<br>
To view the content of the Associative Array, lines 16 through 24 are used.
When running this block of code the output is:
<pre class="brush: sql">
1 - 1
2 - 2
3 - 3
</pre>
<strike>No surprises here, it works as expected.</strike> Well, actually it is surprising take a look at <a href="https://nuijten.blogspot.com/2020/10/qualified-expressions-quite-puzzling_21.html" target="_blank">the follow-up blog.</a>
</p>
<h2>Cursor For Loop</h2>
<p>
The previous example used a Numeric For Loop, but I observed strange things when using a Cursor For Loop.
<pre class="brush: sql">
SQL> drop table t purge
2 /
Table T dropped.
SQL> create table t
2 (id number
3 ,yes_no varchar2(1)
4 )
5 /
Table T created.
SQL> insert into t values (1, 'Y');
1 row inserted.
SQL> insert into t values (2, 'N');
1 row inserted.
SQL> insert into t values (3, 'Y');
1 row inserted.
SQL> commit;
Commit complete.
</pre>
The table contains three records with different Yes/No-indicators. Instead of showing the complete block of code, I will only show the changes that I made to the previous example.
<pre class="brush: sql">
10 for r in (select * from t)
11 loop
12 l_tbl := tbl_t (l_tbl.count + 1 => rec_t (num1 => r.id
13 ,num2 => r.id));
14 end loop;
</pre>
The Numeric For Loop is removed and a Cursor For Loop is inserted in the code block. Instead of using the index-variable from the first code example, I am using the current count of the Associative Array and add one to that (l_tbl.count + 1).<br>
The results are the same as the first code block.
<pre class="brush: sql">
1 - 1
2 - 2
3 - 3
</pre>
</p>
<h2>...but you said Strange Things?</h2>
<p>
Yes, I did... but so far, it seems normal.<br>
The strange thing happens when a condition is added inside the For-Loop.<br>
<pre class="brush: sql">
10 for r in (select * from t)
11 loop
12 if r.yes_no = 'Y'
13 then
14 l_tbl := tbl_t (l_tbl.count + 1 => rec_t (num1 => r.id
15 ,num2 => r.id));
16 else
17 l_tbl := tbl_t (l_tbl.count + 1 => rec_t (num1 => r.id
18 ,num2 => r.id));
19
20 end if;
21 end loop;
</pre>
The table has a Yes_No column and inside the For Loop, the value of this column is used.<br>
When you examine the code above, you will notice that it assigns a value to the collection regardless of the value of the column. The assignment is exactly the same in lines 14-15 and 17-18. The content of the Associative Array after the loop should be the same as without the IF-statement, but this is not the case.<br>
The result looks like this:
<pre class="brush: sql">
1 - 1
3 - 3
</pre>
What happened to the second record?... Your guess is as good as mine.<br>
</p>
<p>
I've tried several variations, like using a local counter to assign instead of the collection.count + 1, but that doesn't make a difference.<br>
Like I said before: I welcome your comments for more insight into this strange behavior.
</p>
<p>
The same is reproducible on Oracle 19...
<pre class="brush: sql">
SQL> select banner
2 from v$version
3 /
BANNER
_________________________________________________________________________
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
</pre>
</p>
<h2>Links</h2>
<ul>
<li>
<a href="https://docs.oracle.com/en/database/oracle/oracle-database/19/lnpls/plsql-collections-and-records.html#GUID-4B379569-4068-4DB1-9D69-E706F4AC5758" target="_blank">Oracle 19 Qualified Expressions</a>
</li>
</ul>Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com6Oosterhout, Netherlands51.6410202 4.861690123.330786363821154 -30.2945599 79.951254036178852 40.0179401tag:blogger.com,1999:blog-1271763227002553835.post-29264420918363145382020-06-05T10:59:00.003+02:002020-06-05T11:26:29.080+02:00Conditional Compilation; One Dynamic Package Specification<p>
When you want to make a distinction between different databases (like Development, Test, Acceptance, and Production) and have a single code base, conditional compilation can help out.
Most of the time I read about conditional compilation when the use case is to eliminate code from the production environment, like instrumentation, which is of course a legit scenario.
Some functionality should never run on Production, but likewise some functionality should never run on Dev, Test or Acceptance.
</p>
<p>
For example: we send out emails to customers about their order statuses. This functionality should only run on the Production database, and never-ever on any of the other databases. Initially there was a toggle in the application dictating whether the emails should go to customers or to a dummy email address within the company. How fool-proof is this solution? Short answer: it's not. Sooner or later someone will set the toggle to "Sent to Customers" in the wrong environment.
Of course there are many more examples like this one.
</p>
<p>
Anyway, to determine the environment, which can then be used for conditional compilation, I use a package specification that is compiled dynamically.
Using the database name, which can be found in the USERENV context, the value is determined.
To output a textual representation of a boolean value, I use LOGGER.TOCHAR to output "true", "false" or "null", but of course you can also write your own boolean-to-text-translation.
</p>
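<p>
If you don't have Logger installed, a minimal boolean-to-text function of your own could look like this (the name and implementation are my own sketch):
</p>
<pre class="brush: sql">
function bool_to_char (p_value in boolean)
   return varchar2
is
begin
   -- simple case on a boolean selector returns 'true', 'false' or 'null'
   return case p_value
             when true  then 'true'
             when false then 'false'
             else 'null'
          end;
end bool_to_char;
</pre>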
<pre class="brush: sql">
declare
l_pack_spec varchar2(32767);
l_env varchar2(25);
begin
l_env := sys_context ('userenv', 'db_name');
l_pack_spec := 'create or replace package environment_pkg'||chr(10);
l_pack_spec := l_pack_spec ||'is'||chr(10);
l_pack_spec := l_pack_spec ||' --=='||chr(10);
l_pack_spec := l_pack_spec ||' -- Environment Information, useful for conditional compilation'||chr(10);
l_pack_spec := l_pack_spec ||' development constant boolean := '||lower (logger.tochar (l_env = 'DEV'))||';'||chr(10);
l_pack_spec := l_pack_spec ||' test constant boolean := '||lower (logger.tochar (l_env = 'TST'))||';'||chr(10);
l_pack_spec := l_pack_spec ||' acceptance constant boolean := '||lower (logger.tochar (l_env = 'ACC'))||';'||chr(10);
l_pack_spec := l_pack_spec ||' production constant boolean := '||lower (logger.tochar (l_env = 'PRD'))||';'||chr(10);
l_pack_spec := l_pack_spec ||' --=='||chr(10);
l_pack_spec := l_pack_spec ||'end environment_pkg;'||chr(10);
execute immediate l_pack_spec;
end;
</pre>
<p>
When you run the code above, the resulting code will be something like:
</p>
<pre class="brush: sql">
package environment_pkg
is
--==
-- Environment Information, useful for conditional compilation
development constant boolean := true;
test constant boolean := false;
acceptance constant boolean := false;
production constant boolean := false;
--==
end environment_pkg;
</pre>
<p>
Indicating that the current database is the Development database.
</p>
<p>
Now you can use conditional compilation to include or exclude certain code sections, like:
</p>
<pre class="brush: sql">
$if environment_pkg.production $then
-- Here comes the code to send an email to customers
....
$else
-- Here comes the code to send an email to a dummy internal address
....
$end
</pre>
<p>
To see the effect of conditional compilation, i.e. what the code actually looks like after preprocessing, you can use the built-in package DBMS_PREPROCESSOR.PRINT_POST_PROCESSED_SOURCE, which writes the result to DBMS_OUTPUT.
</p>
<pre class="brush: sql">
set serveroutput on
begin
dbms_preprocessor.print_post_processed_source (object_type => 'PACKAGE BODY'
,schema_name => user
,object_name => 'MAIL_PKG' -- package where you use the conditional compilation
);
end;
/
</pre>Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com1Oosterhout, Netherlands51.6410202 4.861690123.330786363821154 -30.2945599 79.951254036178852 40.0179401tag:blogger.com,1999:blog-1271763227002553835.post-89458274842257604162020-04-24T12:32:00.000+02:002020-04-24T14:55:55.596+02:00First and Last Day of the ISO-year<p>
The customer had the request for a report regarding the sales figures Year to Date, per week. More specifically: ISO-week.<br>
For more information about ISO-weeks, see Wikipedia: <a href="https://en.wikipedia.org/wiki/ISO_week_date" target="_blank">ISO-week</a><br>
Initially I didn't think too much about it, I know Oracle provides the format mask "IW" to get the ISO-week number, and thought that would do. But there is a little bit more to consider.<br>
More information on format masks in Oracle Database 19c: <a href="https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/Format-Models.html#GUID-DFB23985-2943-4C6A-96DF-DF0F664CED96" target="_blank">Format Masks</a>
</p>
<p>
Formatting the date to reflect the ISO-week is easy enough:
<pre class="brush:sql">
to_char (sysdate, 'iw')
</pre>
which would give you output like '03' or '14'.<br>
Just formatting the dates with this format mask could be sufficient... but there might be a snag to it.<br>
<br>
The customer in my case wanted to see the data per year, so the first selection would be the year. Then a report would be shown with the number of order per ISO-week.<br>
To get all the relevant data from the database I would add a filter like this one:
<pre class="brush: sql">
where order_date between to_date ('01-01-'||:P_YEAR, 'dd-mm-yyyy')
and to_date ('31-12-'||:P_YEAR, 'dd-mm-yyyy')
</pre>
to select all the data for a given calendar year. The benefit of using calendar years is that you know they will always start on January first and end on December 31. Simply concatenate the year that the customer selected and turn that string into a proper DATE.
</p>
<p>
While I was testing the reports for the customer, I noticed that when I selected 2019 as the year, I would also see data for week 01. What's so strange about that? Well, the system didn't go live until August of 2019. There shouldn't be any data for the first week of 2019. Searching through the database backed up my assumption: the earliest orders in the system started in August 2019.
</p>
<p>
And this is the trap I fell into... the combination of a calendar year with ISO-weeks.<br>
Let's take a closer look at the end of 2019 and the start of 2020.<br>
<pre class="brush: sql">
SQL> with test as
2 (select date '2019-12-29' as dt from dual
3 union all select date '2019-12-30' as dt from dual
4 union all select date '2019-12-31' as dt from dual
5 union all select date '2020-01-01' as dt from dual
6 union all select date '2020-01-02' as dt from dual
7 union all select date '2020-01-03' as dt from dual
8 union all select date '2020-01-04' as dt from dual
9 union all select date '2020-01-05' as dt from dual
10 )
11 select dt
12 ,to_char (dt, 'iw') as iso_week
13 ,to_char (dt, 'ww') as calendar_week
14 from test
15 /
DT ISO_WEEK CALENDAR_WEEK
-------- ---------- ---------------
29-12-19 52 52
30-12-19 01 52
31-12-19 01 53
01-01-20 01 01
02-01-20 01 01
03-01-20 01 01
04-01-20 01 01
05-01-20 01 01
8 rows selected.
</pre>
Note the difference in the ISO_WEEK and the CALENDAR_WEEK columns. The first ISO_WEEK starts on December 30 2019, while the first CALENDAR_WEEK starts on January 01 2020.<br>
Also note that Calendar week 53 in 2019 is 1 day long.<br>
</p>
<p>
Because I was selecting all order data in the selected year, up to December 31 2019, and then format it using ISO-weeks, it would show 01 in the report... It was just not week 01 in 2019, but in 2020.
</p>
<h2>Getting the First Day of an ISO-year</h2>
<p>
I decided that to get all the data correctly aligned, I would use the first and last day of the ISO-year. The disadvantage is that the first day of an ISO-year is not fixed, it might start in the year before. The same is true for the last day of the ISO-year, it might end in the year after the one you selected.<br>
Using the format masks (in the documentation link at the top of this blog) it is also possible to pass in an ISO-year.<br>
<pre class="brush: sql">
SQL> select trunc (date '2020-01-01', 'iyyy')
2 from dual
3 /
TRUNC(DA
--------
30-12-19
</pre>
This seems to work, but it doesn't... or at least not for my purpose.<br>
If I pass in January 01 2020 I get the first day of the ISO-year, which is December 30 2019. That's correct.<br>
When passing in January 01 2021, I get the following result:
<pre class="brush: sql">
SQL> select trunc (date '2021-01-01', 'iyyy')
2 from dual
3 /
TRUNC(DA
--------
30-12-19
</pre>
And this is not what I expected... Turns out that January 01 2021 is still in ISO-year 2020, and the first day of that ISO-year is December 30 2019.
</p>
<p>
The "trick" is to pass in a date that is guaranteed to fall in the same ISO-year as the calendar year. Don't choose any date in the last calendar week of December or the first calendar week of January.<br>
Picking a date in the middle of the year will give you the first day of the ISO-year:
<pre class="brush:sql">
SQL> select trunc (date '2021-08-05', 'iyyy')
2 from dual
3 /
TRUNC(DA
--------
04-01-21
</pre>
</p>
<h2>Getting the Last Day of an ISO-year</h2>
<p>
Apply the same trick as when getting the first day of the ISO-year.<br>
Get the first day of the ISO-year following the one you're interested in and subtract one second:
<pre class="brush: sql">
SQL> select trunc (date '2022-08-05', 'iyyy') - interval '1' second
2 from dual
3 /
TRUNC(DA
--------
02-01-22
</pre>
</p>
<p>
And the WHERE-clause with the bind variable from the page:
<pre class="brush: sql">
between trunc (to_date (:P_YEAR||'-08-05', 'yyyy-mm-dd'), 'iyyy')
and trunc (to_date (to_char (to_number (:P_YEAR) + 1)||'-08-05', 'yyyy-mm-dd'), 'iyyy') - interval '1' second
</pre>
</p>
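<p>
For readability, the same logic could be wrapped in a small helper function. This is my own sketch, not part of the original report:
</p>
<pre class="brush: sql">
create or replace function first_day_of_iso_year (p_year in number)
   return date
is
begin
   -- August 5 is an arbitrary date safely in the middle of the year,
   -- away from the ambiguous weeks around the year boundary
   return trunc (to_date (to_char (p_year)||'-08-05', 'yyyy-mm-dd'), 'iyyy');
end first_day_of_iso_year;
/
</pre>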
Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com0Oosterhout, Netherlands51.6410202 4.861690151.5621932 4.7003286 51.7198472 5.0230516tag:blogger.com,1999:blog-1271763227002553835.post-21807241378164385832020-03-25T14:47:00.001+01:002020-03-26T08:55:32.951+01:00Taming The Queue - when the third party can only handle so many connections<p>
We had a problem with the awesome power that the Oracle database offers... or, to be more specific, the third party had a problem with that awesome power.<br>
</p>
<h2>Setting the stage</h2>
<p>
Our application handles many, many orders per day, and mostly they are processed in waves during the day, meaning there are peaks and valleys in the number of orders that need to be processed. When the orders are processed, they are sent to a third-party application using a REST service. Informing the third party doesn't have to be "real time", as long as they get the information.<br>
The way we set up our side of the application is as follows:
<ul>
<li>when the order has a certain status, enough information is placed on a Queue (Advanced Queueing)</li>
<li>A callback function on the Queue will construct the JSON and do the REST Call (using APEX_WEBSERVICE)</li>
</ul>
The advantage of this method is that the operational system can continue working, without having to wait for the REST service to complete. Informing the other system is done in the background.<br>
</p>
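<p>
Registering such a callback is done with DBMS_AQ.REGISTER. A sketch of what that could look like; the queue, subscriber and package names here are made up:
</p>
<pre class="brush: sql">
begin
   sys.dbms_aq.register
      (sys.aq$_reg_info_list
          (sys.aq$_reg_info ('ORDER_QUEUE:ORDER_SUBSCRIBER'    -- queue:subscriber
                            ,sys.dbms_aq.namespace_aq
                            ,'plsql://order_pkg.process_order' -- callback procedure
                            ,hextoraw ('FF')))
      ,1);                                                     -- number of entries in the list
end;
/
</pre>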
<h2>The Problem</h2>
<p>
The advantage of having the database handle the Queue is that it "auto-scales". Because it runs in the background, it can spin up as many "sessions" as it wants to perform the callback function and clear the queue. If the system gets busy, it will scale down the processing of the Queue so that the operational system doesn't slow down to a crawl. Simply put: awesome.<br>
However, the third party could only handle a limited number of connections, four to be precise. The consequence was that when the system spun up many sessions, a lot of them would get the error "Concurrent request limit exceeded. Request blocked." and the orders had to be retried.
<br>
</p>
<h2>Solutions?</h2>
As we know that the third party can only handle four "sessions", we preferred to reduce the number of sessions that the database would spin up. However, there is (as far as I could tell) no way to tell Oracle "you can use at most four sessions to clear the Queue".<br>
Reaching out on Twitter suggested <a href="https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/AQ_TM_PROCESSES.html#GUID-5C41E4B7-FDE2-426D-8C25-24518974CCD8" target="_blank">adjusting the AQ_TM_PROCESSES parameter</a>, but that controls the time, not the number of "sessions".<br>
Another suggestion was to tweak <a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/refrn/JOB_QUEUE_PROCESSES.html#GUID-B8B68D16-00A3-43DD-BE39-01F877880955" target="_blank">JOB_QUEUE_PROCESSES</a>, but to be honest we didn't try that. There are other processes that utilize DBMS_SCHEDULER (e.g. for importing orders) that we didn't want to interfere with.<br>
Thanks to <a href="https://twitter.com/nicetheoryvidar" target="_blank">Vidar Eidissen</a> and <a href="https://twitter.com/phsalvisberg" target="_blank">Philipp Salvisberg</a> for your responses on Twitter.<br>
<h2>Our Solution</h2>
<p>The method that we implemented was as follows:
<ol>
<li>Get an exclusive lock</li>
<li>Construct the necessary JSON and do the REST Call</li>
<li>Release the lock</li>
</ol>
</p>
<p>
To facilitate this method, I created a table which is just to coordinate the locks:
<pre class="brush:sql">
create table lock_handles
(handle varchar2(10)
,nr number
,constraint lh_pk primary key (handle)
);
insert into lock_handles values ('LOCK1', 0);
insert into lock_handles values ('LOCK2', 0);
insert into lock_handles values ('LOCK3', 0);
insert into lock_handles values ('LOCK4', 0);
commit;
</pre>
The only purpose this table serves is locking; thinking about it a little more, I doubt that I need more than one column though.</p>
<p>Next, a function to get a lock; of course this function is placed in a package (for the example, the name of the package is "MYPKG").
<pre class="brush:sql">
function get_lock
return lock_handles.handle%type
is
e_locked exception;
pragma exception_init (e_locked, -54);
function get_lock (p_handle in lock_handles.handle%type)
return lock_handles.handle%type
is
l_nr lock_handles.nr%type;
begin
select lh.nr
into l_nr
from lock_handles lh
where lh.handle = p_handle
for update nowait;
return p_handle;
end get_lock;
begin
for i in 1..4
loop
begin
return get_lock ('LOCK'||to_char (i));
exception
when e_locked
then
continue;
end;
end loop;
-- wait for a little bit before attempting
-- to get the lock again by recursively
-- calling this function
sys.dbms_session.sleep (10);
return mypkg.get_lock;
end get_lock;
</pre>
When this function is called, it first attempts to lock the row with handle "LOCK1". Because of FOR UPDATE NOWAIT, the session doesn't wait when that row is already locked; instead an exception is raised. The Pragma Exception Init names this exception E_LOCKED, so it can be handled by name.<br>
When the exception is raised, meaning another session holds a lock on that row, the function attempts to lock the row with handle "LOCK2", and so on.<br>
If none of the locks can be obtained, because an exception is raised on every attempt, the function waits for a little while (10 seconds) and then calls itself recursively to start all over again.<br>
Once a call obtains a lock on one of the rows, the calling program can continue and do the REST call.<br>
When the REST call is completed, the lock is released by ending the transaction with a commit.<br>
</p>
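<p>
Putting the steps together, the callback that does the REST call would use the function along these lines. This is only a sketch: the JSON construction and the REST call itself are placeholders, outside the scope of this post.
<pre class="brush:sql">
declare
  l_handle lock_handles.handle%type;
begin
  -- step 1: block until one of the four lock rows is obtained
  l_handle := mypkg.get_lock;
  -- step 2: construct the necessary JSON and do the REST call here
  -- step 3: end the transaction, which releases the row lock
  commit;
end;
</pre>
</p>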
<h2>Does it work?</h2>
<p>
So far the theory, but does it work? A valid question, of course, and my first tests shocked me. I tried processing a mere 15 orders and to my surprise discovered that only the first lock handle, LOCK1, was ever used.<br>
It dawned on me that such a small queue could easily be handled by a single session, so the callback simply serialized the processing.<br>
Increasing the load on the queue did indeed show that all lock handles were used. It was also nice to see that after a certain time the use of the lock handles decreased again: it scales to whatever Oracle deems a good way of processing the queue.
</p>
<p>
Could it be handled differently? Yes, probably and I'm open for suggestions.<br>
Could the code be improved? Yes, probably and I'm open for suggestions.
</p>
<h2>Doh!</h2>
Just as I was finishing up this blogpost, <a href="https://twitter.com/connor_mc_d" target="_blank">Connor McDonald</a> chimed in on Twitter.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbxgA0ZEOkKnpYlr7r_cc6vpaBAsF-iw2QGn0EcgcWcJ6IDGNZr0patF52q9kPxmvqZtK-jfGaiRRloTQrgV_4ftrz0rloLxMeOZf0l74wT8wz6W87z6amLleWaZ3ykYOxnZq3Rs88Rx08/s1600/Notifications___Twitter.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjbxgA0ZEOkKnpYlr7r_cc6vpaBAsF-iw2QGn0EcgcWcJ6IDGNZr0patF52q9kPxmvqZtK-jfGaiRRloTQrgV_4ftrz0rloLxMeOZf0l74wT8wz6W87z6amLleWaZ3ykYOxnZq3Rs88Rx08/s320/Notifications___Twitter.png" width="320" height="64" data-original-width="1062" data-original-height="214" /></a></div>
Well, yes Connor, yes, it certainly does...<br>
The rewritten method is as follows:
<pre class="brush:sql">
function get_lock
  return varchar2
is
  l_lock integer;
begin
  loop
    for i in 1..4  -- max number of concurrent sessions
    loop
      l_lock := dbms_lock.request
                  (id                => to_number ('223344'||to_char (i))  -- arbitrary lock number
                  ,lockmode          => dbms_lock.x_mode  -- exclusive mode
                  ,timeout           => 0                 -- don't wait to obtain a lock
                  ,release_on_commit => true              -- release when the transaction ends
                  );
      if l_lock = 1  -- 1 means timeout: another session holds this lock
      then
        continue;    -- try the next lock
      else
        return 'LOCK'||to_char (i);
      end if;
    end loop;
    -- A variation on the suggestion that
    -- Connor made in the comments:
    -- an extra pause here before
    -- attempting to get another lock
    sys.dbms_session.sleep (10);
  end loop;
end get_lock;
</pre>
No need for the extra table, and the code is a lot shorter.
2019-11-26: Human readable JSON, stored in BLOB<p>
Currently (we're using Oracle 18c at the moment) Oracle still recommends storing your JSON documents in a BLOB column: it is more efficient, avoids character-set conversions, and probably has some more advantages.
</p>
<p>
When there is a need to look at the JSON document, having it as a BLOB is not very useful.
</p>
<p>
An easy way to make the JSON Blob readable is to use the following query:
<pre class="brush: sql">
select json_query (i.json_payload, '$' returning clob pretty)
from imports i
</pre>
Omitting "Pretty" will output the JSON object without formatting. Including Pretty makes it look... well... pretty.
</p>
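<p>
As a self-contained illustration, the same call works on a plain string against DUAL, so you can try it without any table. The sample payload is made up; the exact indentation of the pretty output may vary between database versions.
<pre class="brush: sql">
select json_query ('{"this":"that","numbers":[1,2]}'
                  ,'$' returning clob pretty) as pretty_json
from   dual;
</pre>
The same works when the first argument is the JSON BLOB column shown above.
</p>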
2019-10-10: OGB Appreciation Day: APEX_DATA_PARSER: Flexible, Powerful, Awesome (#ThanksOGB #OrclAPEX)<p>
For this year's OGB Appreciation Day I wanted to highlight the very awesome APEX_DATA_PARSER.
</p>
<p>
At my current project one of the requirements is that CSV-files are to be uploaded by the user and the application has to figure out what type of file it is and process accordingly.<br>
So the challenge with this is:
<ul>
<li>Which type of file is it?</li>
<li>What delimiter was used?</li>
</ul>
</p>
<p>
Uploading the file is straightforward enough with Application Express (APEX) and out of scope for this blog. The files are stored in an APEX collection before they are processed.
</p>
<p>
The original procedure to determine what type of file it was and how to process it was quite cumbersome.<br>
The file was parsed and the first line extracted. This string, which effectively was the header line of the CSV file, was compared with a predefined string, and based on this comparison the file type was determined.<br>
The problem was that sometimes the header line would look like this:
<pre class="brush:sql">
DATE_,PAYMENT_METHOD,CURRENCY,AMOUNT,STATUS,ID,DESCRIPTION
</pre>
and sometimes it would look like this:
<pre class="brush:sql">
"DATE_","PAYMENT_METHOD","CURRENCY","AMOUNT","STATUS","ID","DESCRIPTION"
</pre>
or maybe even sometime like this:
<pre class="brush:sql">
"DATE_";"PAYMENT_METHOD";"CURRENCY";"AMOUNT";"STATUS";"ID";"DESCRIPTION"
</pre>
The header line could be enclosed in quotes, or not... The delimiter could be a comma, or not...<br>
I've simplified the example; the real header lines were a lot longer.<br>
You can imagine that parsing the rest of the file was quite tricky, as the file content could have commas as delimiter, or semicolons, or... <br>
Because of all these variations the procedure was quite lengthy and error prone.
</p>
<h2>Simplifying with APEX_DATA_PARSER</h2>
<p>
With APEX_DATA_PARSER things got a lot cleaner, not to mention a lot faster. The example below will only give you the gist of the procedure.
</p>
<p>
First, the different header lines were declared as collections of type APEX_T_VARCHAR2.
<pre class="brush:sql">
l_file1_header apex_t_varchar2 := apex_t_varchar2 ('DATE_'
                                                  ,'PAYMENT_METHOD'
                                                  ,'CURRENCY'
                                                  ,'AMOUNT'
                                                  ,'STATUS'
                                                  ,'DESCRIPTION'
                                                  );
l_file2_header apex_t_varchar2 := apex_t_varchar2 ('ID'
                                                  ,'REF'
                                                  ,'ORDER_'
                                                  ,'STATUS'
                                                  ,'LIB'
                                                  );
</pre>
Second we need to get the header line of the file that was uploaded through the APEX application.
<pre class="brush: sql">
select column_name
bulk collect
into l_file_header
from table (apex_data_parser.get_columns
(apex_data_parser.discover (p_content => r_files.file_content
,p_file_name => r_files.filename
)));
</pre>
APEX_DATA_PARSER offers a function to discover information about the file. This function, aptly named DISCOVER, returns a file profile containing information about the file encoding, the file type, how many rows it parsed, which delimiter is used in case of CSV, etc.<br>
The APEX_DATA_PARSER.GET_COLUMNS function uses the file profile created by the DISCOVER function to obtain information about the content: which columns are in the file, what their datatypes might be, which format masks were used, and in which order the columns appear. <br>
With the simple SELECT statement above we know the header columns in the uploaded file.<br>
<br>
We're almost there..<br>
<br>
Now that the header columns are known, we can determine how to proceed with a straightforward CASE statement.<br>
Why a CASE statement, you might ask? It gives an easy opportunity to raise an exception in case an unknown file is uploaded, i.e. it will raise a CASE_NOT_FOUND exception.<br>
<pre class="brush: sql">
case l_file_header
  when l_file1_header
  then
    process_file1;
  when l_file2_header
  then
    process_file2;
end case;
</pre>
The uploaded files are moved from the APEX collection to a regular heap table called ALL_UPLOADED_FILES (omitted from this blogpost).<br>
<br>
To process the content of the file we can now do an INSERT-SELECT and drop the content into the destination table:
<pre class="brush: sql">
insert into destination_tbl
(date_
,payment_method
,currency
,amount
,status
,description
)
select to_date (col001, 'yyyy-mm-dd hh24:mi:ss')
,col002
,col003
,col005
,col006
,col007
from all_uploaded_files fle
,table (apex_data_parser.parse (p_content => fle.csv
,p_file_name => fle.csv_filename
,p_skip_rows => 1 -- skip the header line
))
</pre>
<br>
On a sidenote: we're using DropZone, so the file(s) are in an APEX collection which is read with the following query:
<pre class="brush: sql">
select c001 as filename
,c002 as mime_type
,d001 as date_created
,n001 as file_id
,blob001 as file_content
from apex_collections
where collection_name = 'DROPZONE_UPLOAD'
</pre>
</p>
A big thank you to the APEX team for implementing this awesome functionality! Thanks ODC!
2019-03-12: The elegant NUMTODSINTERVAL to transform numbers<p>
There is a database table with three columns into which data from an external source is loaded.<br>
The data is loaded as a CSV file and contains three columns to represent the date, hour and minutes. The CSV file is loaded into a staging table with three columns:
<ul>
<li>one for the date, datatype: DATE</li>
<li>one for the hours, datatype NUMBER</li>
<li>one for the minutes, datatype NUMBER</li>
</ul>
<br>
When loading the information into the destination table, there is only one column to hold the data: the three staging columns have to be transformed into a "real" date, meaning a DATE datatype as we know it.
</p>
<p>
One way is to concatenate all the columns together and apply the TO_DATE function with the appropriate format mask. With this method you need to take care of leading zeros for the hours and minutes.<br>
This method would result in something like the following:
<pre class="brush: sql">
to_date (to_char (e.csv_date, 'DD-MM-YYYY')
         || ' '
         || lpad (e.hour, 2, '0')
         || ':'
         || lpad (e.minute, 2, '0')
        ,'DD-MM-YYYY HH24:MI') as clock_time
</pre>
</p>
<p>
As you can imagine, it was only after exceptions were raised that I realized the hours and minutes should be left-padded with zeros.<br>
There is another, easier, way to transform the three columns into a single DATE column:
<pre class="brush: sql">
e.csv_date + numtodsinterval (e.hour, 'HOUR') + numtodsinterval (e.minute, 'MINUTE') as clock_time
</pre>
Using the NUMTODSINTERVAL function for the hours and the minutes makes the transformation trivial. No need to worry about leading zeros (or lack thereof).
</p>
NUMTODSINTERVAL is a great little function that makes code so much more elegant.
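<p>
A quick illustration of the idea against DUAL, with made-up values instead of the staging columns. The displayed result depends on your NLS date settings.
<pre class="brush: sql">
select date '2019-03-12'
       + numtodsinterval (9, 'HOUR')
       + numtodsinterval (5, 'MINUTE') as clock_time
from   dual;

-- with NLS_DATE_FORMAT = 'DD-MM-YYYY HH24:MI:SS':
-- CLOCK_TIME
-- -------------------
-- 12-03-2019 09:05:00
</pre>
</p>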
2018-10-11: ODC Appreciation Day - Comparing JSON, they thought of everything...<p>
Oracle 12c introduced JSON in the database and one of the most recent additions is a function to compare JSON, JSON_EQUAL.<br>
</p>
<p>
From the documentation:
<blockquote>
Oracle recommends that you use BLOB storage.
</blockquote>
When I first learned about this recommendation it surprised me. The contents of a BLOB don't even remotely look like JSON, so how does JSON_EQUAL deal with a comparison between a BLOB and a string?<br>
I mean how does this:
<pre class="brush:sql">7B2274686973223A2274686174227D</pre>
compare to this:
<pre class="brush:sql">{"testing":"information"}</pre>
Let's try and find out.
</p>
<p>
<pre class="brush: sql">
create table t
(json_data blob)
/
insert into t values (
utl_raw.cast_to_raw (
'{"testing":"information"}'
)
);
</pre>
<pre class="brush:sql">
select case
when json_equal (json_data, '{"testing":"information"}')
then 'same' else 'different'
end as result
from t
/
RESULT
---------
same
</pre>
<pre class="brush:sql">
select case
when json_equal (json_data, '{"testing":"different"}')
then 'same' else 'different'
end as result
from t
/
RESULT
----------
different
</pre>
How cool is that?
</p>
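<p>
JSON_EQUAL compares the JSON content, not the textual representation, so member order and insignificant whitespace don't matter either. A small illustration with made-up literals:
<pre class="brush:sql">
select case
         when json_equal ('{"a":1, "b":2}', '{"b":2,"a":1}')
         then 'same' else 'different'
       end as result
from   dual;

RESULT
------
same
</pre>
</p>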
<p>
Later I learned from <a href="https://twitter.com/chrisrsaxon" target="_blank">Chris Saxon</a> about the reason behind the BLOB recommendation: it avoids character set conversions!<br>
So, there are no excuses for not using BLOB as your default choice when storing JSON documents.
</p>
A big thank you to the team that implemented JSON in the database and for thinking of everything!
Thanks ODC!
<h2>Links</h2>
<ul>
<li><a href="https://livesql.oracle.com/apex/livesql/file/content_HDKLKY0WRW0ODC4QS0D9X9FV3.html" target="_blank">Demo script on LiveSQL.Oracle.com</a></li>
<li><a href="https://docs.oracle.com/database/121/ADXDB/json.htm#ADXDB6251" target="_blank">Oracle documentation on JSON</a></li>
<li><a href="https://docs.oracle.com/en/database/oracle/oracle-database/18/sqlrf/SQL-JSON-Conditions.html#GUID-08C75404-6E58-4EBE-A8B4-0B6041B0DB63" target="_blank">Oracle documentation on JSON_EQUAL</a></li>
<li><a href="http://nuijten.blogspot.com/2018/09/json-blob-column-and-check-constraint.html" target="_blank">My blog on JSON Check constraint with a BLOB column</a></li>
</ul>
2018-09-11: JSON, BLOB column and Check Constraint<p>
Last week I attended <a href="https://chandlerdba.com/" target="_blank">Neil Chandler's</a> session on JSON during the POUG conference in beautiful Sopot, Poland.<br>
The <a href="https://docs.oracle.com/en/database/oracle/oracle-database/12.2/adjsn/overview-of-storage-and-management-of-JSON-data.html#GUID-26AB85D2-3277-451B-BFAA-9DD45355FCC7" target="_blank">
JSON Developer's Guide</a> recommends using BLOB for storing data, and this is what Neil also recommended.<br>
I was under the (FALSE!) impression that it was not possible to put an IS JSON check constraint on a BLOB column, simply because I tried once and got an exception.
After the session I asked Neil if this was the case, but (of course) he said that it was possible.<br>
"Trust, but verify", so here is my test case.
</p>
<p>
My tests are done on a 12.1.0.2.0 release of the Oracle database, as can be seen in the output below.
<pre class="brush: sql">
SQL> select banner
2 from v$version
3 /
BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
TNS for Solaris: Version 12.1.0.2.0 - Production
NLSRTL Version 12.1.0.2.0 - Production
</pre>
</p>
<p>
First of all create a table with a BLOB column to hold the JSON data.
<pre class="brush: sql">
SQL> create table t
2 (json_data blob)
3 /
Table T created.
</pre>
</p>
<p>
The most easiest way of placing an IS JSON check constraint on the column is to use the following statement:
<pre class="brush: sql">
SQL> alter table t add constraint json_chk check (json_data is json)
2 /
Error starting at line : 1 in command -
alter table t add constraint json_chk check (json_data is json)
Error report -
ORA-40499: no format defined for binary data type
</pre>
Because it failed, I was under the assumption that it is not possible to place an IS JSON check constraint on the column.<br>
Turns out that my check constraint definition was incomplete (I guess I should have read the exception and the documentation more closely).
</p>
<p>
When you place the check constraint in the following manner on the column, the IS JSON constraint is added.
<pre class="brush: sql">
SQL> alter table t add constraint json_chk check (json_data is json format json)
2 /
Table T altered.
</pre>
In later versions of the database (I've also tried on 12.2 and 18c) it is possible to use the former syntax, there is no need
to use "FORMAT JSON" in the check constraint definition.
</p>
<p>
Let's see the BLOB column with the check constraint in action.<br>
First attempt to insert a valid JSON-string into the table:
<pre class="brush: sql">
SQL> insert into t values ('{"this":"that"}' )
2 /
Error starting at line : 1 in command -
insert into t values ('{"this":"that"}' )
Error report -
ORA-01465: invalid hex number.
</pre>
Even though the string is valid JSON, it is not a binary representation. For this to work UTL_RAW is required:
<pre class="brush: sql">
SQL> insert into t values (utl_raw.cast_to_raw('{"this":"that"}') )
2 /
1 row inserted.
</pre>
Trying to create an entry in the table which is not JSON will result in a violation of the JSON check constraint.
<pre class="brush: sql">
SQL> insert into t values (utl_raw.cast_to_raw('just a string') )
2 /
Error starting at line : 1 in command -
insert into t values (utl_raw.cast_to_raw('just a string') )
Error report -
ORA-02290: check constraint (ALEX.JSON_CHK) violated.
</pre>
When the value that you're trying to insert is not a binary value, it will always fail with "ORA-01465: invalid hex number":
<pre class="brush: sql">
SQL> insert into t values ('just a string' )
2 /
Error starting at line : 1 in command -
insert into t values ('just a string' )
Error report -
ORA-01465: invalid hex number.
</pre>
<p>
One of the benefits of placing an IS JSON check constraint on the column is that you can use a "dot-notation" to query out the data:
<pre class="brush: sql">
SQL> select tst.json_data.this
2 from t tst
3 /
THIS
--------------------------------------------------------------------------------
that
</pre>
Keep in mind that you must use a table alias for this to work. Omitting the table alias will result in an error.
<pre class="brush: sql">
SQL> select json_data.this
2 from t tst
3 /
Error starting at line : 1 in command -
select json_data.this
from t tst
Error at Command Line : 1 Column : 8
Error report -
SQL Error: ORA-00904: "JSON_DATA"."THIS": invalid identifier
</pre>
</p>
2018-01-09: JSON_TABLE and the Top 2000<p>
Oracle database 12c provides native JSON parsing. It is relatively easy to go from data in a JSON format to a relational representation.<br>
In this example I will use data from the Top 2000. Every year in The Netherlands a playlist is produced by popular vote and fully broadcast between Christmas and the last day of the year, with the number one being played just before the New Year. You can see the complete list of 2017 <a href="https://www.nporadio2.nl/top2000" target="_blank">on the official NPO website.</a>
<br>
The Top 2000 list that I'll use for this example is the one from 2017.
</p>
<p>
The data from the JSON file looks like this:
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgO04dERMUTT7cS0jpWleNcCa5U6Gl-Nra7Dys7ZCPx0oosdUvykO4NMwpr6SeJtjwFU_pnZo37H7ccj1wzZn5loCo5H6sgmOABdJkKOuoOQWO0939SnkJIVPcW9uZBNR8U4uTsT6iK9LI8/s1600/json-formatted.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgO04dERMUTT7cS0jpWleNcCa5U6Gl-Nra7Dys7ZCPx0oosdUvykO4NMwpr6SeJtjwFU_pnZo37H7ccj1wzZn5loCo5H6sgmOABdJkKOuoOQWO0939SnkJIVPcW9uZBNR8U4uTsT6iK9LI8/s320/json-formatted.PNG" width="175" height="320" data-original-width="394" data-original-height="722" /></a></div>
Only the first part of the JSON file is shown, with the first two songs.
<br>
At the highest level there are three name-value pairs ("success", "message" and "messages") and an array named "data".<br>
The "data" array contains another array with JSON objects containing information about the songs.<br>
Each of these JSON objects contains name-value pairs with very short, non-descriptive names, like "s" and "a". You might guess what these names mean.<br>
Even though the "data" attribute holds a nested array, the nesting is only one level deep.<br>
The complete JSON-file can be downloaded by following <a href="https://www.nporadio2.nl/?option=com_ajax&plugin=Top2000&format=json&year=2017" target="_blank"> this link.</a>
</p>
<p>
At first I want to get the JSON file into the database, before I extract the values that I need.
<br>
First create the table and a check constraint to make sure that it is JSON.
<pre class="brush: sql">
create table temp2000
(complete_list clob);
alter table temp2000
add constraint list_is_json check (complete_list is json)
/
</pre>
This table will hold the complete JSON file containing the Top2000. The check constraint on the column will verify that only correctly formatted JSON is allowed.<br>
Now add the JSON file to the table; the easiest way to do this is by adding a row using SQL Developer and copy-pasting the complete JSON.
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjb8uNVp4rYXrDOgj94F6fj9_e97nq3aaW_-96OnBqluT3MiVcJaqPlxUBbHDZyE_MJ-cwweK1pUOC1nZBoygt9X9PrfzAW-W6CfWIcSgglok6lcmgMVkRZxtK3r9noVuaBC8-78MEzTxiS/s1600/save_temp2000.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjb8uNVp4rYXrDOgj94F6fj9_e97nq3aaW_-96OnBqluT3MiVcJaqPlxUBbHDZyE_MJ-cwweK1pUOC1nZBoygt9X9PrfzAW-W6CfWIcSgglok6lcmgMVkRZxtK3r9noVuaBC8-78MEzTxiS/s320/save_temp2000.PNG" width="320" height="281" data-original-width="896" data-original-height="786" /></a></div>
</p>
<p>
What I want to extract from the complete JSON file is the information about the artist, songtitle, release year, current position in the list, as well as the position in the list from last year.
<br>
My guess would be that the "a", "s", "yr", "pos", "prv" are the attributes that I need.
<br>
With a CTAS (Create Table as SELECT) and the JSON_TABLE operator I can transform the JSON to relational representation.
<pre class="">
create table top2000
as
select songtitle
,artist
,release_year
,curr_position
,prev_position
from temp2000 t
,json_table (t.complete_list format json, '$.data[0][*]'
columns (
songtitle varchar2(150) path '$.s'
,artist varchar2(150) path '$.a'
,release_year number path '$.yr'
,curr_position number path '$.pos'
,prev_position number path '$.prv'
)
)
/
</pre>
Because the song information is contained in the "data" array, and only in the nested array, I need to address that array as follows:
<pre class="brush: sql">
'$.data[0][*]'
</pre>
Zero (in square brackets) represents the first array in the "data" attribute, and a wildcard (the asterisk) addresses the objects in the nested array.
</p>
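<p>
To see this path expression in isolation, here is a minimal, self-contained example; the inline JSON is made up but mimics the structure of the Top 2000 file, an array nested inside the "data" array.
<pre class="brush: sql">
select songtitle
      ,artist
from   json_table ('{"data":[[{"s":"Bohemian Rhapsody","a":"Queen"}
                             ,{"s":"Hotel California","a":"Eagles"}]]}'
                  ,'$.data[0][*]'
                   columns (songtitle varchar2(50) path '$.s'
                           ,artist    varchar2(50) path '$.a'
                           )
                  );
</pre>
Both objects in the nested array come back as rows, one per song.
</p>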
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5FfghTmWPgnAO3ShEPOITfuMu6j4rHlQXsVnwCsdsqIReICv-g0dqiW7_9XEmbegNFd7nnYi8roXk_i30JrvPPpDRG-7UcktoDjQS8HaLROmflNs30Y6IiSkCEhxqLIEt3GSKt9ni3bwh/s1600/relational.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5FfghTmWPgnAO3ShEPOITfuMu6j4rHlQXsVnwCsdsqIReICv-g0dqiW7_9XEmbegNFd7nnYi8roXk_i30JrvPPpDRG-7UcktoDjQS8HaLROmflNs30Y6IiSkCEhxqLIEt3GSKt9ni3bwh/s320/relational.PNG" width="320" height="202" data-original-width="739" data-original-height="466" /></a></div>
<p>
To help with the discovery of the structure (and the paths to the values) of the JSON, Oracle Database 12c Release 2 introduced <a href="https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/JSON_DATAGUIDE.html" target="_blank">the JSON_DATAGUIDE function</a>. Unfortunately I don't have Release 2 at my disposal right now, so I'll leave that for a later time.
<br>
</p>
<h2>Links</h2>
<ul>
<li>
<a href="https://www.nporadio2.nl/top2000" target="_blank">NPO Top2000 2017</a>
</li>
<li>
<a href="https://www.nporadio2.nl/?option=com_ajax&plugin=Top2000&format=json&year=2017" target="_blank">Top 2000 JSON from 2017</a>
</li>
<li>
<a href="https://docs.oracle.com/en/database/oracle/oracle-database/12.2/sqlrf/JSON_DATAGUIDE.html" target="_blank">Oracle documentation: JSON_DATAGUIDE</a>
</li>
</ul>
2017-10-10: ODC Appreciation Day: EMP #ThanksODC<p>
Here is my very short entry for <a href="https://oracle-base.com/blog/2017/09/25/odc-appreciation-day-2017-thanksodc/" target="_blank">the Oracle Developer Community Appreciation Day 2017.</a>
</p>
<p>
Very often I use the good ol' EMP table to demonstrate even the latest features of SQL and PL/SQL. <br>
Everybody seems to know the EMP table, and some even know some of the content off the top of their head (yes, I'm guilty of that too).
Whenever I need to write a hierarchical query and am not really sure which column goes on which side in the CONNECT BY clause,
I will use EMP to test and verify my assumptions. Something I did just this afternoon.<br>
</p>
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhh2DYKOxCla9YF9DVXVIW_CAFnpnaehZYL-2eVgUF7j4zKYT2Xb_alCzF8mAK9N3Qx_WBUoDm0UV3zG-vNhgmytOjzcxtjjJ2nlN29ZaosERY2cYJqoDwPwMEJmE_kdyC3xsUjI-lOuB5t/s1600/Pasted_Image_10_10_2017__21_15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhh2DYKOxCla9YF9DVXVIW_CAFnpnaehZYL-2eVgUF7j4zKYT2Xb_alCzF8mAK9N3Qx_WBUoDm0UV3zG-vNhgmytOjzcxtjjJ2nlN29ZaosERY2cYJqoDwPwMEJmE_kdyC3xsUjI-lOuB5t/s320/Pasted_Image_10_10_2017__21_15.png" width="320" height="218" data-original-width="375" data-original-height="256" /></a></div>
<p>
I found this old screenshot showing that the EMP table was around in Oracle database version 4.1.1.. history doesn't tell whatever happened to Carter though...
</p>
<h2>Update</h2>
<p>
Apparently I'm not the only one who's a fan of the EMP, there is even a store and a museum...
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtQB34S3sQVpttuDOm-SWVdZfjg26kwHMDXJf22S9nHmZzQFCQ5wJmHFxpdS7OjvdjYe-NJfJBEb2WOKNvuENn9_vQOdQB028X7G7bGwtW98pRf8_XNuungVyCfESSHTk-aWdksah7DLRb/s1600/IMG_0733.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtQB34S3sQVpttuDOm-SWVdZfjg26kwHMDXJf22S9nHmZzQFCQ5wJmHFxpdS7OjvdjYe-NJfJBEb2WOKNvuENn9_vQOdQB028X7G7bGwtW98pRf8_XNuungVyCfESSHTk-aWdksah7DLRb/s320/IMG_0733.JPG" width="320" height="240" data-original-width="1600" data-original-height="1200" /></a></div><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9iuzxn4kCA4GJ4HQoff9YLvMAyqavvJ5iRLCPSllJJUBxfwKVRXbCmqbwrkfV0pwx5eXeqnbOp_xprhz2D9CDGH6WhOFLelS-7FU8FlXEN73ie_lFmeHm6v6XubWxlzft0MZ5w9e5dt09/s1600/IMG_2990.JPG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9iuzxn4kCA4GJ4HQoff9YLvMAyqavvJ5iRLCPSllJJUBxfwKVRXbCmqbwrkfV0pwx5eXeqnbOp_xprhz2D9CDGH6WhOFLelS-7FU8FlXEN73ie_lFmeHm6v6XubWxlzft0MZ5w9e5dt09/s320/IMG_2990.JPG" width="240" height="320" data-original-width="1200" data-original-height="1600" /></a></div>
</p>
2017-06-01: Wanted New Speakers<p>
Yesterday there was an email from the UK Oracle User Group (UKOUG) about their <a href="http://tech17.ukoug.org/" target="_blank">upcoming Tech conference</a> at the end of the year.<br>
There was a call to action to get more new speakers, but with a twist. This email wasn't directed at everybody, but specifically to people who have presented before at a UKOUG event.<br>
Why would you send an email looking for new speakers and only address the people that "have done it before"?<br>
The call to action was basically: "reach out to new speakers by offering to do a joint presentation".<br>
</p>
<h1>Why and How?</h1>
<p>
Now where did that idea come from? <br>
A while ago I read <a href="http://apexconnect2017.blogspot.nl/2017/05/apex-connect-2017.html" target="_blank">a blog about the APEX Connect 2017 conference</a>, it was posted anonymously and in German (Thank you Google Translate). Some parts of the blog were very positive, but there was also a critical note about the number of new speakers at the conference.<br>
How the agenda was put together for the APEX Connect 2017 conference is up to the conference committee, there is only so much you can do about that - like providing feedback.<br>
But that blog did get me thinking... I know that a lot of user groups really want to have new speakers, I also know that there are a lot of people out there that have a story to tell.<br>
When I talk to people, at conferences, at work, anywhere, and ask them about doing a presentation, a lot of the same answers can be heard: "What I do is very simple", "I wouldn't know how to write an abstract", "I've got nothing to say", "so-and-so is an expert in that field, I've got nothing to add to it", "Nobody will show up", etc. etc.<br>
After a brainstorming session with Chris Saxon (while lunching at the RigaDevDays conference) and Jan Karremans (while waiting at the airport) I came up with the idea of a joint-presentation. Get a new speaker hooked up with a more experienced speaker and let the magic happen.<br>
What will you get out of it as a first time speaker;
<ul>
<li>You don't have to talk the whole slot, you only have to do half of the talk...</li>
<li>Did you get a question that you don't know the answer to? There is someone else on stage that might know the answer.</li>
<li>Get pointers for writing an abstract</li>
<li>Get pointers for putting together a slide deck</li>
<li>Get pointers for telling the story</li>
<li>...</li>
</ul>
The more experienced speaker can help with the above, and perhaps more.<br>
</p>
<h1>Act now</h1>
Doing a joint presentation is a lot of work; it takes serious effort to put it all together. Anyone who has presented before knows this, which is why I was pleasantly surprised to see so many speakers reaching out.<br>
Alright, so if you've ever considered doing a (joint) presentation about SQL, PL/SQL and/or APEX - get in touch.<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgFh5qp430BKMCYMLJFRZp_5Y6vGo2bNtu5o1jaELLfSYzlfis272NR6K2FuSXFWVq5CrHCSrBzoHH2CCsNzbmugau7wfZnXUk3IJ6XIamO01wyEPFTaeN-zbj9bDfLSGHF9y-E5pXQi_M/s1600/TweetDeck.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgFh5qp430BKMCYMLJFRZp_5Y6vGo2bNtu5o1jaELLfSYzlfis272NR6K2FuSXFWVq5CrHCSrBzoHH2CCsNzbmugau7wfZnXUk3IJ6XIamO01wyEPFTaeN-zbj9bDfLSGHF9y-E5pXQi_M/s320/TweetDeck.png" width="320" height="200" data-original-width="598" data-original-height="374" /></a></div>
Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com2Oosterhout, Netherlands51.6410202 4.861690100000032551.5621932 4.7003286000000326 51.7198472 5.0230516000000325tag:blogger.com,1999:blog-1271763227002553835.post-50380873986619515512017-03-28T10:00:00.000+02:002017-03-28T10:00:08.255+02:00Good old BIN_TO_NUM to check the overall status<p>
A good while ago <a href="https://twitter.com/chrisrsaxon" target="_blank">Chris Saxon, member of the AskTom answer team</a>, asked on Twitter which datatype you use when you need a Boolean representation in a table. As you might know, there is no Boolean datatype in SQL.<br>
A lot of discussion followed which I'm not going to repeat.<br>
Usually I use a VARCHAR2(1) with a check constraint allowing only Y and N, but for a recent requirement I decided to use a NUMBER instead.
</p>
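<p>
For contrast, the VARCHAR2(1) approach mentioned above typically looks something like this. This is just a minimal sketch; the table and column names are made up for illustration:
</p>
<pre class="brush: sql">
-- illustrative only: a Y/N flag with a check constraint
create table preferences
(pref_name varchar2(30) not null
,is_active varchar2(1) default 'N' not null
    check (is_active in ('Y','N'))
)
/
</pre>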
<p>
The requirement that I needed to implement was the following:
<blockquote>
A number of tasks need to be fulfilled, the order doesn't matter, and they need to be checked by a supervisor. Before the supervisor gives his/her stamp of approval, the tasks all need to be completed.
</blockquote>
For this example I will leave the authorization out, it is irrelevant.
<pre class="brush: sql">
create table tasks
(task_date date not null
,task_a number not null check (task_a in (0,1))
,task_b number not null check (task_b in (0,1))
,task_c number not null check (task_c in (0,1))
,tasks_checked number not null check (tasks_checked in (0,1))
)
/
</pre>
The table contains a date for which the tasks need to be completed, indicators for the individual tasks (A, B, and C), and the Tasks Checked indicator.<br>
All indicators are NOT NULL, of datatype NUMBER, and only allowed to contain a zero or a one. A zero indicates FALSE, a one indicates TRUE.<br>
</p>
<p>
The requirement states that the verification of the tasks may not take place before all tasks are done. There are several ways to implement this requirement, and this time I'm going to use the combination of zeros and ones to create a binary representation.<br>
I will add the overall status as a virtual column to the table using the built-in function BIN_TO_NUM.
<pre class="brush: sql">
alter table tasks
add (bin_status as (bin_to_num (task_a, task_b, task_c, tasks_checked)))
/
</pre>
</p>
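<p>
BIN_TO_NUM treats its first argument as the most significant bit, so the indicators line up as TASK_A, TASK_B, TASK_C, TASKS_CHECKED from high to low. A quick sanity check of the values referred to below (the column aliases are just for illustration):
</p>
<pre class="brush: sql">
select bin_to_num (1,1,1,0) as all_done_not_checked -- 1110 binary = 14
,      bin_to_num (1,1,1,1) as all_done_and_checked -- 1111 binary = 15
,      bin_to_num (0,1,0,1) as checked_too_early    -- 0101 binary = 5, odd
from   dual
/
</pre>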
<p>
Now the (not so) magic part: <br>
When the binary status equals 14, all tasks are completed and can be checked by a supervisor.<br>
When the binary status is an odd number, the checking of the tasks was done before the tasks were completed, and this is not allowed. The only exception is when the binary status equals fifteen (15): then all tasks are done and they have been checked by a supervisor.<br>
When the binary status is an even number, all is good.<br>
All this can easily be captured in a CHECK constraint:
<pre class="brush: sql">
alter table tasks
add constraint chk check (
case when bin_status = 15 then 1
when mod (bin_status, 2) = 0 then 1
else 0 end = 1
)
/
</pre>
</p>
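<p>
To double-check the constraint logic, the same CASE expression can be used to enumerate all sixteen possible statuses and show which ones the constraint would allow. A sketch using a CONNECT BY row generator:
</p>
<pre class="brush: sql">
-- enumerate bin_status 0..15 and apply the same rule as the check constraint
select level - 1 as bin_status
,      case when level - 1 = 15 then 'allowed'
            when mod (level - 1, 2) = 0 then 'allowed'
            else 'rejected'
       end as verdict
from   dual
connect by level <= 16
/
</pre>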
<p>
Finally two inserts to show that it works as expected.
<pre class="brush: sql">
insert into tasks
(task_date
,task_a
,task_b
,task_c
,tasks_checked
)
values
(sysdate
,0
,0
,1
,0
);
1 row created.
insert into tasks
(task_date
,task_a
,task_b
,task_c
,tasks_checked
)
values
(sysdate
,1
,0
,1
,1
);
insert into tasks
*
ERROR at line 1:
ORA-02290: check constraint (ALEX.CHK) violated
</pre>
</p>
<p>
For the first time I've used LiveSQL to create the scripts that go along with this blog post; you can find them <a href="https://livesql.oracle.com/apex/livesql/file/content_EQSK8T706HF9KHLKWUIYH2CQI.html" target="_blank">right here.</a>
</p>
<h2>Links</h2>
<ul>
<li>
<a href="http://docs.oracle.com/cd/B28359_01/server.111/b28286/functions013.htm" target="_blank">Oracle documentation: BIN_TO_NUM</a>
</li>
<li>
<a href="https://livesql.oracle.com/apex/livesql/file/content_EQSK8T706HF9KHLKWUIYH2CQI.html" target="_blank">LiveSQL demo</a>
</li>
</ul>Alex Nuijtenhttp://www.blogger.com/profile/06345615264010120428noreply@blogger.com2Oosterhout, Nederland51.6410202 4.861690100000032551.5621932 4.7003286000000326 51.7198472 5.0230516000000325