Oracle ORDS CRUD Generator

Currently I’m writing a blog post about provisioning the Autonomous Transaction Processing (ATP) database and using it from Process Cloud Service (PCS) and Visual Builder Cloud Service (VBCS). Right now, however, I would like to share something that can benefit everyone working with ATP and Oracle REST Data Services (ORDS): the ORDS CRUD Generator.

Link: you can find my JSFiddle here. If you want the tool in VBCS format, please ask me in the comments!
Remark: this tool is still under construction, so not all SQL inputs might work (yet). Please do report bugs in the comments and I’ll make sure the JSFiddle gets fixed. Update: it now works for copy-paste from both APEX SQL Workshop and SQL Developer.

This little tool that I wrote generates SQL code that makes creating services easier than ever before! But why do that? Let me explain:
We are using one schema for multiple domains. Why? Because we want, for example, one address table that we can use and query in a generic way. We do, however, want domain-specific (address) services. So let’s say we have two domains, marketing and orders: we are going to create two separate address CRUD service templates, /marketing/address and /order/address. Why not simply auto-REST the address table? Mainly for two reasons. The first is extensibility: if requirements ever change, we can quickly and easily add PL/SQL / SQL to our REST handlers. The second is that we want applications in each domain to be able to release without influencing the others.

In the GET handler we don’t do select *, but select the fields we currently have (like select street, city, etc). By doing this we decouple the fields between the ORDS service layer and the database itself. If one of the domain teams changes a table by adding a field, which covers about 90% of database changes, the other team is not influenced AT ALL (say team marketing adds an IS_APARTMENT field to the address table; the orders team is not affected). If we rename or remove database fields, which will not happen often, we have two options:

  • Decide on a common release window (always when removing a field)
  • When renaming, the team has to talk to the other teams to check whether it impacts their services. The team then renames all the fields in the ORDS services, so the change does not impact any applications that use the services.

So what does the tool do for us? It makes life easier by automatically creating all the ORDS CRUD services, even including a procedure so we can do a PATCH as well! For the PATCH we use optional fields in the PUT HTTP method, which means we can update just a few fields of a record instead of always having to update the complete record.
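To illustrate the idea (a hypothetical body, using the address example above): a PUT with only one field filled updates just that column, because the missing bind variables arrive as NULL and the generated procedure falls back to the current values:

{
    "street": "New Street 12"
}

I use APEX on ATP myself to run the SQL. The tool has several fields (VBCS view; the JSFiddle is slightly different):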

EDIT: The tool now gives three separate result textareas: one for the normal output when the module already exists, one for the PATCH procedure and one for when the module does not exist yet
  • DB Schema: the schema of the DB you want the services in
  • Systemfields: the fields which should NOT be exposed to the outside world; in our teams we have triggers for last updated by, last updated on, created by, etc.
  • REST Module: the name of the REST module you want to add these templates to
  • REST Path: the path you want in the URI, for example /marketing/
  • ORDS Put procedure: the name of the procedure for the optional PUT (PATCH). The name is filled automatically but can be changed
  • Handler prefix: adds text to each URI of specific handlers, for example “jarvispizza-” makes the URI for the handler URI “jarvispizza-address” and the template URI “/marketing/jarvispizza-address”
  • Tablename: automatically filled, database tablename in ATP
  • ID Field: the ID field in the database, needed for the single-record GET and DELETE operations (this field maps to the :id in the URI, so in address/4, the 4 is the key in the ID Field)
  • Select * in get: this switch determines whether we want specific fields or select * in the GET (select * from address vs. select street, city, etc from address)
  • Do camelCase: this switch determines whether the POST and PUT should use camelCasing for underscore fields

The textarea takes the SQL statement that created your table, as shown in the screenshot. An example:

APEX SQL Workshop test table definition
SQL Tab of table
CREATE TABLE  "TESTORDSGEN" 
   (	"ID" NUMBER(16,0), 
	"TESTCOL1" VARCHAR2(128) COLLATE "USING_NLS_COMP", 
	"TESTCOL2" VARCHAR2(128) COLLATE "USING_NLS_COMP", 
	"TESTCOL3" VARCHAR2(128) COLLATE "USING_NLS_COMP", 
	 CONSTRAINT "TESTORDSGEN_PK" PRIMARY KEY ("ID")
  USING INDEX  ENABLE
   )  DEFAULT COLLATION "USING_NLS_COMP"
/

CREATE OR REPLACE EDITIONABLE TRIGGER  "BI_TESTORDSGEN" 
  before insert on "TESTORDSGEN"               
  for each row  
begin   
  if :NEW."ID" is null then 
    select "TESTORDSGEN_SEQ".nextval into :NEW."ID" from sys.dual; 
  end if; 
end; 

/
ALTER TRIGGER  "BI_TESTORDSGEN" ENABLE
/

The first button, Parse, validates the input and fills the Tablename and procedure fields.
The second button, Convert/Create, generates the output SQL statements.
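For the curious: under the hood the Parse step boils down to pulling the table name and column names out of the CREATE TABLE statement. A minimal sketch of that idea in JavaScript (simplified; NOT the actual JSFiddle code, which handles more cases):

// Minimal sketch: extract the table name and column names from a
// CREATE TABLE statement. Real input also has constraints, triggers, etc.
function parseCreateTable(sql) {
  var tableMatch = sql.match(/CREATE\s+TABLE\s+"?(\w+)"?/i);
  var tableName = tableMatch ? tableMatch[1] : null;
  var columns = [];
  var columnRegex = /"(\w+)"\s+(NUMBER|VARCHAR2|DATE|TIMESTAMP|CLOB)/gi;
  var match;
  while ((match = columnRegex.exec(sql)) !== null) {
    columns.push(match[1]); // e.g. ID, TESTCOL1, TESTCOL2, ...
  }
  return { tableName: tableName, columns: columns };
}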

The two resulting SQL statements in the output should be run separately in the SQL Worksheet or SQL Commands tool seen in the screenshot. Running them as one statement is still one of the stretch goals 😉

Important! The first time a module is created, run both the SQL in the “SQL Modules” section AND the “SQL Procedure” section. When the module already exists, run the normal “SQL Output” section AND the “SQL Procedure” section instead. If you mix these up, ORDS will first create an empty module and you will lose all your previous work.

Procedure SQL Script
ORDS SQL Script

The output of the tool that is in the screenshots:

create or replace procedure ords_put_testordsgen_prc
 (l_id IN NUMBER,
 l_testcol1 IN VARCHAR2,
 l_testcol2 IN VARCHAR2,
 l_testcol3 IN VARCHAR2)
 is
 begin
 update TESTORDSGEN alias_testordsgen set
 alias_testordsgen.testcol1 = nvl(l_testcol1, alias_testordsgen.testcol1),
 alias_testordsgen.testcol2 = nvl(l_testcol2, alias_testordsgen.testcol2),
 alias_testordsgen.testcol3 = nvl(l_testcol3, alias_testordsgen.testcol3)
 where alias_testordsgen.ID = l_id;
 end;
DECLARE
l_roles     OWA.VC_ARR;
l_modules   OWA.VC_ARR;
l_patterns  OWA.VC_ARR;
BEGIN

ORDS.ENABLE_SCHEMA(
p_enabled             => TRUE,
p_schema              => 'ENEXIS',
p_url_mapping_type    => 'BASE_PATH',
p_url_mapping_pattern => 'enexis',
p_auto_rest_auth      => FALSE);

ORDS.DEFINE_MODULE(
p_module_name    => 'standaard_aansluitingen',
p_base_path      => '/sa/',
p_items_per_page => 25,
p_status         => 'PUBLISHED',
p_comments       => 'VBCS Generated Module');

ORDS.DEFINE_TEMPLATE(
p_module_name    => 'standaard_aansluitingen',
p_pattern        => 'testordsgen/:id',
p_priority       => 0,
p_etag_type      => 'HASH',
p_etag_query     => NULL,
p_comments       => 'VBCS Generated resource');

ORDS.DEFINE_HANDLER(
p_module_name    => 'standaard_aansluitingen',
p_pattern        => 'testordsgen/:id',
p_method         => 'DELETE',
p_source_type    => 'plsql/block',
p_mimes_allowed  => '',
p_comments       => 'VBCS Generated delete handler',
p_source         => 'delete from TESTORDSGEN where ID = :id');

ORDS.DEFINE_HANDLER(
p_module_name    => 'standaard_aansluitingen',
p_pattern        => 'testordsgen/:id',
p_method         => 'GET',
p_source_type    => 'json/item',
p_mimes_allowed  => '',
p_comments       => 'VBCS Generated get handler',
p_source         => 'select ID, TESTCOL1, TESTCOL2, TESTCOL3 from TESTORDSGEN where ID = :id');

ORDS.DEFINE_HANDLER(
p_module_name    => 'standaard_aansluitingen',
p_pattern        => 'testordsgen/:id',
p_method         => 'PUT',
p_source_type    => 'plsql/block',
p_mimes_allowed  => 'application/json',
p_comments       => 'VBCS Generated put handler',
p_source         => 'BEGIN
ords_put_testordsgen_prc(:id, :testcol1, :testcol2, :testcol3);
END;');

ORDS.DEFINE_TEMPLATE(
p_module_name    => 'standaard_aansluitingen',
p_pattern        => 'testordsgen',
p_priority       => 0,
p_etag_type      => 'HASH',
p_etag_query     => NULL,
p_comments       => 'VBCS Generated resource');

ORDS.DEFINE_HANDLER(
p_module_name    => 'standaard_aansluitingen',
p_pattern        => 'testordsgen',
p_method         => 'POST',
p_source_type    => 'plsql/block',
p_mimes_allowed  => 'application/json',
p_comments       => 'VBCS Generated post handler',
p_source         => 'BEGIN
insert into TESTORDSGEN (TESTCOL1, TESTCOL2, TESTCOL3) values (:testcol1, :testcol2, :testcol3);
END;');

ORDS.DEFINE_HANDLER(
p_module_name    => 'standaard_aansluitingen',
p_pattern        => 'testordsgen',
p_method         => 'GET',
p_source_type    => 'json/collection',
p_mimes_allowed  => '',
p_comments       => 'VBCS Generated get handler',
p_source         => 'select ID, TESTCOL1, TESTCOL2, TESTCOL3 from TESTORDSGEN');

l_roles.DELETE;
l_modules.DELETE;
l_patterns.DELETE;

COMMIT;

END;

When the statements have been run, both the procedure and all the CRUD REST services should have been created!

Generated procedure
Generated REST services

One can then test this with Postman. Do note that in the current tool version, when the do camelCase feature is checked, the POST and PUT fields are automagically converted into lowerCamelCase. For example, a POST should look like this (the field is TEST_COL_UNDERSCORE in the database, which should be used in lowercase when the do camelCase feature is unchecked):

{
    "testcol1": "guns",
    "testcol2": "and",
    "testcol3": "rosez",
    "testColUnderscore": "somethin"
}
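The conversion itself is straightforward. A minimal sketch of how an underscore column name could be turned into lowerCamelCase (simplified; not the exact JSFiddle code):

// TEST_COL_UNDERSCORE -> testColUnderscore, TESTCOL1 -> testcol1
function toLowerCamelCase(columnName) {
  return columnName
    .toLowerCase()
    .replace(/_([a-z0-9])/g, function (match, chr) {
      return chr.toUpperCase();
    });
}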

Also notice that the filtering just keeps on working without a problem!

I hope you enjoyed this read and can use the tool in your own environment. Feel free to comment and ask me questions when needed. The tool is free to use; please do not copy and distribute it yourself, but use my link.

Refresh posts of Jarvis Pizzeria

From the summer of 2017 until the summer of the year after, Marcel, together with two other consultants, wrote 30+ blogs about Oracle PCS. In these blogs, just about all the different components of PCS were reviewed. To close this era, Marcel has written three summarizing refresher blogs in which all previous blogs are revisited.

Below are the links to these blogs.

On to a new series of Jarvis Pizzeria blogs about OIC.

VBCS: Using cookies to save user preferences / user input from the UI

Jarvis serves cookies pizza!

Today we will be looking into two examples of using cookies when baking pizza. The first is a hard save of information through a save and a get button on a webpage; the second is on-the-fly saving of information on a webpage (saving fields locally in cookies so a user can leave and re-enter a page without losing data).

This post will also include dynamically showing and hiding table columns, as well as using the afterNavigate event.

PRELUDE

Before starting, we need to realize that Jarvis is in the business of baking pizzas, not cookies. This means we will import the cookie functionality using js-cookie:
https://github.com/js-cookie/js-cookie

Luckily for us, js-cookie supports the AMD format, so we can import it with RequireJS (the only format that VBCS “understands”). You can read why here: https://requirejs.org/docs/whyamd.html

Normally we would import the js-cookie files inside the application. I am not a big fan of this, however, so I always try to point RequireJS at an external location. I do not want to host the files myself though, so after a single Google query I found the following:
https://cdnjs.cloudflare.com/ajax/libs/js-cookie/2.2.1/js.cookie.min.js

(Please note that cdnjs.cloudflare.com is a very reliable source of common JS libraries).

To be able to use this library inside VBCS we can import it by editing the Source View -> webApps -> your application -> app-flow.json.
Paste the following section:

"requirejs": {
  "paths": {
    "cdnJsCookie":"https://cdnjs.cloudflare.com/ajax/libs/js-cookie/2.2.1/js.cookie.min"
  }
},

Note that VBCS always appends .js automatically! So you have to remove .js from the URL.

PART 1

Now we are ready for our first page: saving a preference or information via a button. To do this, we will first create a second page called main-pizza-list. Let’s leave it empty for now and create a button on main-start with the text “Navigate to pizza list”.

Add a standard ojAction event and create an action chain (renaming the action chain to ‘navigateToPizzaListChain‘). Add a Navigate step, choose peer page and pick main-pizza-list. You can try this out, and you can see in the URL that you navigate to the sub page.

Now before we continue we want a pizza list, the easiest way to do this is adding a BO. I’ve added a BO called Pizza, with columns name (string) and ‘hasCookiesOnPizza’ (boolean). I have filled the BO manually with some rows.

Now let’s design main-pizza-list. Add a switch component with the text “Show ‘hasCookiesOnPizza’ column” and two buttons with the texts “Save preference” and “Get preference”. Also add a table beneath that. When editing the table, go to the wizard and choose Add Data (it’s greyed out for me because I already did that).

In the wizard choose the BO Pizza, and then choose the id, name and ‘hasCookiesOnPizza’ columns. Ignore the third screen and finish the setup. It should now show you 4 rows in design mode.

The next step is to define all the variables. Go to the variables page of the pizza list (it should already show ‘pizzaListSDP‘) and add an array of objects called ‘pizzaListColumns‘. Add the following code to the “default value” field:

[
  {
    "headerText": "id",
    "field": "id"
  },
  {
    "headerText": "name",
    "field": "name"
  },
  {
    "headerText": "hasCookiesOnPizza",
    "field": "hasCookiesOnPizza"
  }
]

Also add a variable ‘showCookiesColumn’ of type Boolean, and set the default value to true.

Now let’s get back to the page. Click on the switch and change the Data tab -> Value to: ‘{{ $variables.showCookiesColumn }}‘. Then switch to the Code view in the top right, and change the ‘columns=’ part of the table to: ‘columns=”{{ $variables.pizzaListColumns }}”‘.
The page should still work, but we have now bound our visual components to our variables.

The next step is to add some logic! Let’s show and hide the ‘hasCookiesOnPizza’ column dynamically. Go to the variables page and select ‘showCookiesColumn‘, then go to the Events tab on the right and add an event of type ‘onValueChanged‘. Rename the action chain to ‘onShowCookieChangeChain‘.

After this we can do some really smart stuff, but let’s keep the action chain really simple for now:
Add an if statement with condition ‘showCookiesColumn‘.
Add an assign variable step in the true path that assigns the following value to ‘pizzaListColumns’:

[
  {
    "headerText": "id",
    "field": "id"
  },
  {
    "headerText": "name",
    "field": "name"
  },
  {
    "headerText": "hasCookiesOnPizza",
    "field": "hasCookiesOnPizza"
  }
]

IMPORTANT!! Change the Reset Target to empty (otherwise VBCS adds the above entries to the ‘pizzaListColumns’ array instead of overwriting the array).
Add an assign variable step in the false path that assigns the following value to ‘pizzaListColumns’ (with the ‘hasCookiesOnPizza’ column removed):

[
  {
    "headerText": "id",
    "field": "id"
  },
  {
    "headerText": "name",
    "field": "name"
  }
]

Now go back to the page and press Live or Play, slide the switch and the column should disappear and reappear!!

The last step is to use the cookie functionality. We start with the code first, as this is easier. Go to the JavaScript page of main-pizza-list. Here we want to import our js-cookie library, so we can use it to save and get cookies. We do this by editing the top of the JavaScript file, where we import our “cdnJsCookie” and give it the name cookies to access it inside this file:
define(["cdnJsCookie"], function(cookies) {

Now for the methods, add the following code:

/**
*
* @param {String} cookieName
* @param {String} cookieValue
*/
PageModule.prototype.SetCookie = function (cookieName, cookieValue) {
  var expiresInFiveMinutes = new Date(new Date().getTime() + 5 * 60 * 1000);
  cookies.set(cookieName, cookieValue, {"expires":expiresInFiveMinutes});
};

/**
*
* @param {String} cookieName
* @return {String} the cookie value (VBCS can convert this to a Boolean in the action chain)
*/
PageModule.prototype.GetCookie = function (cookieName) {
  return cookies.get(cookieName);
};

Note that I have let the cookies expire after 5 minutes for testing purposes. It is also nice to see that auto-completion works inside the editor because of the AMD format: try typing ‘cookies.‘ inside a method and wait a second, and all methods provided by the js-cookie library will be listed.

Now we want to use this functionality in the UI. Create two ojAction events with their respective action chains. I have called them ‘saveShowCookiePreference’ and ‘getCookiePreferenceAndSetBoolean‘. Start with ‘saveShowCookiePreference‘: add a Call Module Function step and pick ‘SetCookie‘. Click on one of the field input mappings on the right, set cookieName to ‘vbcsJarvisPizzaShowCookies’ and link ‘showCookiesColumn’ on the left side to cookieValue on the right side. This action chain is now finished.

The second action chain is ‘getCookiePreferenceAndSetBoolean‘. Add another Call Module Function step that calls ‘GetCookie’ (I named the result ‘getCookieValue‘), map the cookieName to ‘vbcsJarvisPizzaShowCookies’ and pick Boolean as the return type. Also add an assign variable step, and assign ‘getCookieValue’ to ‘showCookiesColumn‘.

That’s it! Now we have a working cookie setter and getter. Try running the page, disabling the cookies column, saving the preference, refreshing the page and pushing the get preference button. The switch and cookie column should be disabled!! One can imagine this is extra powerful when done automatically, which I will partially show in the next part of the blog.

PART 2

Imagine that after you picked your pizza you are rerouted to our order form. Create a page called main-pizza-order-form, and also add a button to the main application that navigates to this order form. Create a form on the main-pizza-order-form page with three input fields: name, address and extra order info. Bind them to three String variables name, address and extraInfo (see above for more info).

I prefer having a single save-cookie action chain, so create an action chain manually called ‘setCookieForField‘. Add one input variable of type Object called eventDetails. Add event listeners to the name, address and extraInfo String variables that call ‘setCookieForField’ with the complete event mapped.

Now we need the cookie library again: follow the above js-cookie steps for this page, or copy-paste them from your main-pizza-list page.

In the ‘setCookieForField’ action chain, add a Call Module Function step and pick the ‘SetCookie’ method. For the cookieName, use the following expression (with expression mode on):
"vbcsPizzaOrderForm_" + $chain.variables.eventDetails.name

This ensures that we get the variable name (so we are saving the cookies ‘vbcsPizzaOrderForm_name‘, ‘vbcsPizzaOrderForm_address’ and ‘vbcsPizzaOrderForm_extraInfo‘). Map the cookieValue to ‘$chain.variables.eventDetails.value‘.

Now create an action chain called ‘getCookieForFields‘; for simplicity I have done this manually, though one could do it dynamically with JavaScript. Add a Run in Parallel step with three Call Module Function steps calling ‘GetCookie‘, mapped to ‘vbcsPizzaOrderForm_name‘, ‘vbcsPizzaOrderForm_address’ and ‘vbcsPizzaOrderForm_extraInfo‘. Add one assign variable step per branch, assigning the result of the module function to the correct variable.

Lastly, we add an event on vbAfterNavigate and pick the ‘getCookieForFields’ action chain. We are finished with our page! It will now save all input whenever a value from one field is committed (so you can also fill just one input and it will be saved inside a cookie). Try typing and reloading the page!

Collaboration: Jarvis Pizzeria

Jarvis Pizzeria is back! I’m proud to announce that I am the newest member of the Jarvis Pizzeria legacy. What does this mean? It means I will be doing these blogs together with the highly esteemed Marcel van de Glind under the umbrella of Jarvis Pizzeria (which makes all examples uniform and a lot easier to understand). Marcel is a master of the on-premise suite and growing towards cloud expert. He will use that expertise to update his past posts to the new and improved cloud versions, as well as posting new and exciting topics, of course. For more information check out his blog and his post:
https://myfmw.wordpress.com/

Marcel van de Glind

ICS: Conditional REST / JSON mapping in OIC

Mapping optional fields is quite fiddly in Integration Cloud Service (ICS) when using JSON. In this post I would like to show the steps I’ve taken to reach a solution, as I strongly believe the road to a solution is just as important to share. Reading the whole post gives insight into the inner workings of ICS; if you only want the solution, scroll down to the bottom of this post. There is also an example with Process Cloud Service (PCS) in this post.

Goal and problem description

The goal of my feature:
Do a PATCH to a REST Business Object in the VBCS database from inside a process

For this post:
Let’s say I have a reservation system for a restaurant and I want to PATCH the number of guests and/or the date/time of a reservation from a running Structured Process

So, I thought of what I would need for such a feature beforehand:

  1. We need the Business Object, its endpoint and the field names (Reservation BO, fields: customerName (string), numberOfGuests (integer), inOrOutside (string))
  2. We need an integration that does a PATCH operation on the reservation BO; the input is the ID of the row in the URL plus the fields numberOfGuests and reservationTimestamp. We want to be able to do a PATCH /reservation/{ID}
  3. We would need to make the fields optional in the integration
  4. We need the activity that saves the Reservation Business Object key (the {id} part). For this example we assume that the part of the structured process that posts the reservation is already there and maps a reservationId back into the process context
  5. We need to create an activity that calls the ICS integration to do the PATCH on the reservation BO.

Off I went to create all of the above. After I finished 1 & 2, I decided to do some testing. I added a reservation BO entry manually and called my integration from Postman. I did a PATCH to {id} using the following JSON body as input for the integration trigger:

{
     "numberOfGuests": 2,
     "inOrOutside": "inside"
}

I used the following body to post to the DB invoke:

{
     "numberOfGuests": 2,
     "inOrOutside": "inside"
}

And I used the following output for my whole integration (from the DB):

{
     "id": 0,
     "numberOfGuests": 2,
     "inOrOutside": "inside"
}

My PATCH worked. Success!!

Postman call

Off I went to make the fields optional. How to do that, you ask? Well, we can only do XSLT mapping in ICS. We need to put an if statement AS PARENT of the node, thus:

<xsl:if xml:id="id_27" test="/nssrcmpr:execute/nssrcdfl:request-wrapper/nssrcdfl:numberOfGuests">
    <nstrgdfl:numberOfGuests xml:id="id_20">
          <xsl:value-of xml:id="id_21" select="/nssrcmpr:execute/nssrcdfl:request-wrapper/nssrcdfl:numberOfGuests"/>
    </nstrgdfl:numberOfGuests>
</xsl:if>

This should work: when a value is absent, nothing is mapped. I went to test the first optional field, sending only the number of guests. This worked!! Nice.

Postman call
Database entries

Second test, sending only the inside or outside choice and... my integration failed. I was baffled: what was going on?? I checked the monitoring and it gave no errors? The integration call was successful. So what went wrong then..? This makes no sense!

Postman call

Or does it??

ICS converts EVERYTHING to XML. This is very important to understand when working with JSON. I traced the payload and found out that it still sends my numberOfGuests field, but empty!!
Here is the payload:

<nstrgdfl:response-wrapper>
   <nstrgdfl:id>3</nstrgdfl:id>
   <nstrgdfl:numberOfGuests/>
   <nstrgdfl:inOrOutside>inside</nstrgdfl:inOrOutside>
</nstrgdfl:response-wrapper>

Why did it only fail with numberOfGuests empty and not with inOrOutside empty? I checked the payload when sending only numberOfGuests filled and keeping inOrOutside empty, and the result was an empty node as well:

<nstrgdfl:response-wrapper>
   <nstrgdfl:id>3</nstrgdfl:id>
   <nstrgdfl:numberOfGuests>4</nstrgdfl:numberOfGuests>
   <nstrgdfl:inOrOutside/>
</nstrgdfl:response-wrapper>

So what’s the difference between numberOfGuests and inOrOutside? Then I saw it: Postman got an empty string back for the inOrOutside field instead of no field at all, which is what one would expect. This is due to the empty XML node that the SAX parser of ICS writes. After checking the server logs, the error there confirmed my suspicion: it could not convert to an integer! Is there a solution?

Solutions

There are two solutions. The first is converting the output of the REST database call AND the output of the complete service to a JSON string instead of an integer. The second is more elegant but more work: you need TWO if statements when mapping the output of the REST service to the output of the integration. We also check numberOfGuests != '' because of the empty XML node; note that this compares the field to an empty string (this also works if the field is NOT a string itself, since the SAX parser makes it an empty string regardless):

<xsl:if xml:id="id_41" test="$patchReservationDB/nsmpr0:executeResponse/ns6:response-wrapper/ns6:numberOfGuests">
    <xsl:if xml:id="id_44" test="$patchReservationDB/nsmpr0:executeResponse/ns6:response-wrapper/ns6:numberOfGuests != """>
          <nstrgdfl:numberOfGuests xml:id="id_29">
                <xsl:value-of xml:id="id_30" select="$patchReservationDB/nsmpr0:executeResponse/ns6:response-wrapper/ns6:numberOfGuests"/>
          </nstrgdfl:numberOfGuests>
    </xsl:if>
</xsl:if>

(There are actually three solutions; the third is a database trigger that always fills numberOfGuests with 0 instead of empty in the DB. In this example that would be feasible; in the real-life situation this example is based on, it was not an option.)

Nice! So my integration works (from Postman). Please note that these errors also occur when we have a GET with an optional integer in the result set.

Postman call

So what happens when I call this from a PCS activity and map these fields as optional? Patching both fields works fine, patching numberOfGuests works fine, patching inOrOutside... does not. Again the numberOfGuests field is mapped empty in the input of the integration, giving an error on the input side of the PATCH on the REST DB call. The DB does not like an empty string in an integer field and gives an error (rightfully so).
So we need to include the empty check on the input side as well, giving:

<xsl:if xml:id="id_27" test="/nssrcmpr:execute/nssrcdfl:request-wrapper/nssrcdfl:numberOfGuests">
    <xsl:if xml:id="id_34" test="/nssrcmpr:execute/nssrcdfl:request-wrapper/nssrcdfl:numberOfGuests != """>
          <nstrgdfl:numberOfGuests xml:id="id_20">
                <xsl:value-of xml:id="id_21" select="/nssrcmpr:execute/nssrcdfl:request-wrapper/nssrcdfl:numberOfGuests"/>
          </nstrgdfl:numberOfGuests>
    </xsl:if>
</xsl:if>

Finally, a working optional PATCH integration. This is very cumbersome to do for every field though, if your entity has for example 15 optional fields, so I really hope there will be an out-of-the-box solution for this in the future! If that happens I’ll of course post it on this blog again, so... subscribe!

OIC: Oracle Integration Cloud, what is it?

After some feedback I have decided to write a post that gives a small summary of what OIC is and how one can position it within the ecosystem of an enterprise. I will first give an overview of the three main features of OIC, then share some of my own perspective on the strengths and weaknesses of the products, and close with positioning OIC in an enterprise ecosystem.

Overview trifecta (PCS, ICS, VBCS)

Process Cloud Service (PCS)

The Oracle Process Cloud Service is where one creates structured and dynamic processes. These are written in BPMN, so they should be understandable for the business. All processes have an input and an output, and activities can be either automatic or human (tasks), as PCS comes with out-of-the-box task management. A particular strength is being able to develop both structured and dynamic processes: where structured processes run deterministically down a pre-defined path, dynamic processes have “prerequisites” for each activity. Each time data changes inside the dynamic process, all the prerequisites are re-evaluated (it uses phases to keep this check small).

When you have BOTH PCS and Integration Cloud Service (ICS), you can use all the integrations very easily from the processes, and they can be updated with just a few clicks. Working together with ICS, it is also possible to have correlation “wait” steps that wake up when the correct correlation ID is delivered to that endpoint. PCS impresses here as well, as these processes are not necessarily held in memory (a smart hydration / dehydration mechanism). When modelling processes it is important to model them in such a way that they have one goal and are easily extendable and interchangeable (separation of concerns!!). We can achieve this because it is very easy to call subsequent processes from within a process, or to call back to an overlaying orchestration process.

Example: orchestration dynamic process A enables a customer to make a reservation for a restaurant. The input is the contact info, date/time and number of guests. Structured process B is in charge of checking whether the reservation is correct: it checks if the date/time is valid and if the number of people does not exceed the limits. In case any of these are wrong, a human task is triggered with a UI screen that shows the contact information together with an editable date/time and an editable number of guests. The screen also has a cancel button. If a human task is triggered, structured process B automatically waits until the task is finished. Process B terminates afterwards, and a callback is sent to orchestration process A with one result: the reservation is cancelled or not. Orchestration process A has another activity, structured process C, with a prerequisite: activity structured process B is finished AND the reservation is not cancelled. This structured process C is in charge of writing the reservation to the database and emailing the guest that the reservation has completed successfully.

Integration Cloud Service (ICS)

The Integration Cloud Service is where one creates integrations with other systems, integrations with CRUD operations that read from or write to the database, and integrations that send notifications. It also has several adapters that do a lot of work for the user. ICS uses connections (triggers and invokes) and integrations. Each integration must have a trigger (input) and can use any number of invokes inside the integration (calls to other services). ICS works best with app-driven integrations, which are a sort of mini structured processes. I would highly recommend keeping business logic OUT of integrations as much as possible (we have PCS for that, right? We only want one place for business logic).

Visual Builder Cloud Service (VBCS)

The Visual Builder Cloud Service has two components: Business Objects and Apps. Business Objects are a sort of out-of-the-box database with added functionality like DB triggers, validation rules and automatically generated REST connectors. It also has some great ways of modelling relationships between entities in a really simple way; when modelling these relationships one can even define accessors between entities. The other part of VBCS is the Apps. Apps are applications that are designed in a Rapid Application Development way (80% drag and drop, 20% custom). What VBCS does very well is enabling users to easily drag and drop components and bind them to action chains. These action chains are very powerful in what they can do, for example calling integrations or even starting a deployed PCS process! The VBCS pages also support custom JavaScript calls. The apps work really well with the Business Objects, of course, where pages can be auto-generated based on the Business Object schema.

Strengths and weaknesses

PCS

Strengths in my opinion:

  • Dynamic processes are not always possible in other BPM tools
  • Dynamic processes have phases
  • Dynamic processes have optional and / or repeatable activities
  • Processes have security per role or group (as swimlanes!)
  • Possibility to both use JSON and XSDs to read business types
  • Possibility to easily consume integrations that are modelled on ICS
  • Possibility to easily call other processes
  • Strong data mapper with transform possibilities for each input/output activity
  • All processes have auto generated endpoints when activating
  • Task management is strong out of the box
  • A lot of APIs are available for PCS
  • Strong “tracking” functionality of the state of the process or where something went wrong
  • Strong correlation of wait steps

Weaknesses in my opinion:

  • Human tasks can use APIs and can also have an external UI URL (for custom task screens), but this does not work very well yet (for example, you cannot send a payload when finishing a task)
  • Integration with Visual Builder Cloud Service (VBCS) could be stronger, for example possibility to include VBCS pages inside tasks or make specific “task pages” inside VBCS that have some features out of the box, like claiming, approving, reassigning tasks
  • Fiddly with data types (converting integers, doubles and strings is very error prone when using PCS and ICS together, especially with optional fields)
  • APIs could be better documented (with more examples)

ICS

Strengths in my opinion:

  • Connections and integrations are separated so that connections can easily be changed (base URL for example)
  • Works very well with PCS
  • Able to have multiple operations inside one integration
  • Able to do XSLT mappings, including XPath 2.0
  • Able to choose pre-defined system users from drop downs

Weaknesses in my opinion:

  • Optional fields do not work well in ICS, need a very cumbersome IF check
  • Obscure error messages (“something went wrong”)
  • Activity stream is not very clear; often I just want to see “I sent this payload to that URL”, but this information is hidden
  • Connections do not have a very easy “where used” functionality
  • Difficult to see which connections are used when editing an integration
  • Auto mapping button (Recommend) is very weak: when I have two identical structures left and right, it maps 0 elements when they are not in the same namespace (also for JSON!)
  • Mapping UI is very different from PCS’s mapping UI
  • Complex mapping (for-each, for example) is very difficult

VBCS

Strengths in my opinion:

  • Business Object modelling is very easy (also UI for relationships and even auto drawn diagrams)
  • Business Objects have auto generated REST connectors INCLUDING accessors
  • Business Objects that are related can be called through either accessors or expand=all query parameters
  • Business Objects that are related can be deep inserted through an integration
  • Business Objects can use GroovyScript when doing database triggers and business rules
  • Business Objects can have pre-determined data in dev, stage and live
  • Apps are easily made and fast to prototype
  • Action chains have a lot of features
  • Apps can call custom JavaScript code
  • Custom reusable JET components possible
  • Mostly REST / JSON based

Weaknesses in my opinion:

  • Business Objects do not support views or joining of tables
  • No easy use of JavaScript libraries over applications (see other blog post)
  • Apps can get complex easily, as the flows are not very intuitive (the necessity for subflows is because of missing features in navigation)
  • Mapping inside action chains is a different UI from ICS and PCS
  • Many features not well documented (yet)
  • No bring your own database (yet)

Please note that Oracle is very helpful with a lot of these weaknesses, trying to fix them and placing them on the roadmap.

Using OIC inside an enterprise ecosystem

I see OIC as a mid-to-large enterprise solution, often in an enterprise that has a lot of other flanking apps and possibly an ERP system. I think OIC should be the process orchestrator for business processes, started by either a user or a service call through ICS (internal or external). This orchestration process then uses either human tasks (connecting to the user) or automated activities, often using ICS to connect flanking apps or the outside world. In my opinion this should always go through an API management tool. When connecting an ERP system, however, we found that using an API management platform is not very practical. We would rather create web services on the ERP system and expose those, subsequently consuming these services in ICS. We like to do these ERP calls synchronously, while other integrations can be done asynchronously (the process supports this through correlation IDs on wait steps). When we use such an orchestration process, we should be careful not to put too much data inside that running process context. A much better solution is saving that data in a database (VBCS Business Objects) while the process knows the “keys” to those business objects. As for human interaction, everything should be done through VBCS, either interacting with tasks or with the business objects.

Two notes I would like to make. The first is that VBCS could be interchanged with any UI framework, as the APIs used to call PCS and ICS are generally available. Integrations and processes are much more easily called from VBCS action chains though (no need to call/code the APIs yourself). The second is that the VBCS BOs could easily be interchanged with ANY database (preferably with a REST adapter on top), which probably gives more flexibility and options (views and joining of tables, creating reports).

My recommendation would be to definitely use a custom database instead of VBCS Business Objects from the start. Extra intel: a little birdy told me that Bring Your Own Database could be a real possibility in the future (Business Objects would be on top of your own database).

VBCS: JavaScript usage, challenges, architecture and governance for enterprises

Oracle Cloud Integration (OIC), Visual Builder Cloud Service (VBCS)
JavaScript usage, challenges, architecture and governance for enterprises
Including smart debugging for action chains

In our current project we are using OIC as a combination of Process Cloud Service (PCS), Integration Cloud Service (ICS), Oracle Field Service Cloud (OFSC) and Visual Builder Cloud Service (VBCS). This is a two-year project with over 200 people in the project teams. Some five of those (agile) teams work with OIC, and this poses some challenges with regard to architecture and governance. This post shows how we mitigate the risks of custom code and bring the application to operations successfully.

Let’s go through some of the project’s history first. The project began with an architecture of no-code in VBCS, where custom JET components were the only place custom code could live. After a few months we already saw that this was not a feasible solution, as VBCS is not even designed as a no-code product and custom JET components are UI components. Something like mapping a custom structure requires JavaScript, which was not allowed under the chosen architecture. This is why we needed to create a model with which we could maintain control of the custom code.

Important! Most of this post still remains true; however, the /latest is not on the roadmap as of this moment. Our project will keep using the same strategy, but we are moving towards either a CDN approach or a Developer Cloud Service build that pushes our libraries to a location where we can host the static JavaScript files.

NB: Oracle has positioned VBCS as a Rapid Application Development (RAD) framework with a low-code environment, but not as a no-code environment.

We set off to create a plan to tackle this problem. First, we identified the categories of JavaScript we need, each with its own challenges and recommendations.

Categories of JavaScript

  • Data mapping, takes an object as input and returns an object as output
    • Example: for a filter of a Service Data Provider we use a complex filter with multiple $contains which we want to map
    • We should try and avoid using custom code for mappings that call a create, update or delete as we want those actions to be transparent
  • Front end data manipulation, this is data manipulation on the client side. Data is never manipulated on the backend.
    • Example: quick search of a list of expected size (say we know the list is of a certain small size, so we can always get all items to the frontend), where the list is filtered as you type (see the sketch after this list)
    • Example #2: date time manipulation (say I want to add one week to a date?), here we want to use something like momentJS
  • HTML component changes, changes to page layout or elements
    • Example: create a child <li> under an <ul>
  • API calls to 3rd party libraries or components
    • Example: calls to the GoogleMaps API when you have such a map component on your page
  • Cookies, we want to save and read functional cookies
    • Example: we want to save the column show/hide preferences of the user
  • What we never want to do:
    • Any HTTP / HTTPS calls directly from JavaScript code; these calls will not go through the Oracle proxy. This would give all sorts of problems with CORS, authentication and the client device’s proxy.
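To make the front-end data manipulation category concrete, here is a minimal sketch (a hypothetical helper, not our actual library code) of a quick search over a small client-side list:

// Minimal sketch: filter a small client-side array while the user types.
// Assumes items is an array of objects with a "name" field.
function quickSearch(items, searchTerm) {
  var term = searchTerm.toLowerCase();
  return items.filter(function (item) {
    return item.name.toLowerCase().indexOf(term) !== -1;
  });
}

// quickSearch([{name: "Margherita"}, {name: "Quattro Formaggi"}], "mar")
// returns [{name: "Margherita"}]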

After we identified each of these categories and decided how we wanted to use them inside the application, we went on to another important question: how are we going to apply this to VBCS?

Complexity in enterprises

We had already created several applications when we realised the complexity of using VBCS within an enterprise:

  • Multiple applications (5 teams working in OIC!)
  • Multiple places where JS code can exist
    • Shell
    • Application
    • Flow
    • Page

So what happens when an error occurs? Or when we want to change, for example, the date format throughout our whole organization? The answer to these questions: you must dive into each application. If an error occurs on a certain “screen”, you have to check the page, the flow, the application and the shell for the code. For example, if we wanted to change a date format inside a JavaScript method formatDate(date) on application level, we would have to edit each application within our enterprise. This is when we decided to search for a different solution: custom JavaScript libraries. It took some time to find out how these work inside VBCS, but it is possible to include your own or 3rd-party libraries in a clean way (I will show that later). These libraries have their own set of requirements though.

Custom JavaScript library requirements

  • custom JS libraries can be easily managed by the teams, with generic and team specific libraries
  • custom JS libraries can be easily managed by operations
  • custom JS libraries are versioned (repository)
  • custom JS libraries are hosted on a highly available location (preferably at least as available as VBCS) and have no CORS problems
  • custom JS libraries can be unit tested
  • custom JS libraries are stateless (only input and output parameters, no states, can only access global variables that are NOT changed by code, example follows later)
  • custom JS libraries are written in AMD export format so that VBCS “understands” them
  • custom JS libraries should be documented (for example JSDoc format)

After a lot of thinking we created a solution which ticks all of our boxes. Let’s see how we include a custom library in VBCS (we do this only for staging in this example).

Step 1: Create a JavaScript libraries VBCS application (I called it GN_JavaScript_Libraries)

This application will encapsulate all of our custom JavaScript libraries. Why do we do this in VBCS? Because it ticks many of the requirements, of course! We can do versioning in VBCS, it is hosted with the same availability as the VBCS applications, we have no CORS problems, and it can be easily managed by our teams (everybody can already access VBCS!).

Create a web application as well; I created myJavaScriptApplication for this example.

Step 2: Create your folder structure and upload it to the JavaScript library

Create a folder structure on your disk; this needs to match the folder structure of the source view. For the web application myJavaScriptApplication:
webApps/myJavaScriptApplication/resources/lib/gn/development.js
webApps/myJavaScriptApplication/resources/lib/gn/visual.js
webApps/myJavaScriptApplication/resources/lib/sa/something.js

Where GN in our enterprise is the short notation for generic and SA for one of our teams.
ZIP this and use the VBCS import function to import the ZIP into the application. With this folder structure we already have some information about what each of the files does.
What this also accomplishes is that operations does not need insider VBCS knowledge; they only need to be able to navigate to each of these .js files in VBCS to change code.

Step 3: setting some initial JS code

Let’s put some code in one of the libraries. From the source view in VBCS we can simply navigate to the resources folder and edit the .js files in the inline VBCS editor.

In development.js, copy-paste the following code (ticking the Asynchronous Module Definition (AMD) export and documentation requirements):

define([], function() {
  var exports = {};

  /**
  * Logs a line to the client console
  * @author Rick Kelder
  * @param {Object} objectToBeLogged The object to be logged to the console in default format
  */
  exports.LogMe = function(objectToBeLogged) {
    console.log("ApplicationLog: ", objectToBeLogged);
    return;
  };

  return exports;
});

Step 4: Getting the right URL

To complete this step we need some additional information, as well as some explanation of how VBCS works. Each time an application is staged or put live, the folders in the source view are NOT published to the stage URL directly, but rather to an auto-generated version. So each stage action will also cause this generated version number to change. So how do we find this generated version?

Stage the application and run the empty myJavaScriptApplication from staging, press F12 and look at the first network response (it’s called js/ in Chrome) and open the response. There is a BASE_URL_TOKEN in there; this ID is what we need (we can also see this ID when clicking a file like shell-page.js: the request URL has the same generated version number).

Step 5: Putting the right URL inside a consuming application

VBCS can require libraries in the application-flow.json (source view -> webApps -> app); this is done by adding the following piece of code. We include one custom library in this example (I always put this part of the code just above the “security” entry) and use the correct version number:

"requirejs": {
  "paths": {
	   "gnDevelopment":"../../../../../GN_JavaScript_Libraries/1.0/webApps/myJavaScriptApplication/version_1562228076000/resources/lib/gn/development"
	}
},

The left side is the short notation for each library which we will use in the consuming application.

Step 6: calling the LogMe function from inside an action chain

So how do we call this from an action chain? We need to define a VBCS JavaScript function for this, as the action chain does not “understand” external functions (yet). I have chosen the application level for consuming my LogMe function, so I can use it throughout all action chains of my consuming application. Paste the following in the JS part of the application:

/**
 * LogMe
 * @param {Object} objectToBeLogged
 */
AppModule.prototype.callLogMe = function (objectToBeLogged) {
  gnDevelopment.LogMe(objectToBeLogged);
};

This will cause an error. Why is that? Because we haven’t defined gnDevelopment in this context yet. We can fix this by changing the first line of the code:
define(["gnDevelopment"], function (gnDevelopment) {

Now we can add a Call Module Function step in an action chain and let it perform callLogMe with something like “test” mapped to objectToBeLogged. If we run the application and trigger the action chain, we will see a successfully logged message. We have successfully included our own JS library!

Extended

We also have a really nice way of changing code throughout EVERY application. We will give an example using a date/time format that we can change throughout the complete enterprise (step 7), which will also use the 3rd-party library momentJS. We will also use the library call inside the HTML code of the application.

We also have some really smart ways to use logging inside action chains. Normally we have a log step in the action chain which writes a message to the Chrome console. When we go live we don’t want this logging to occur, so we remove it once the action chain works. When we later need to update the action chain, though, we have to re-add those logging steps. We have a clean solution for this (step 8).

Step 7: including momentJS and formatting a date

In this step we want to include momentJS inside our custom JavaScript library. To do this we edit the application-flow.json OF THE LIBRARY ITSELF as follows:

"requirejs": {
	  "paths": {
       "moment":"https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.24.0/moment-with-locales.min"
    }
  },

Please note that this does NOTHING for the consuming application: when moment is not loaded there, the consuming application will try to read it relative to its own current path, so:
myJavaScriptApplication/version_1562228076000/resources/lib/gn/moment

We need to use moment inside our own custom library; this time I am editing visual.js, where we include moment in the top line:

define(["moment"], function(moment) {
    var exports = {};
    
    /**
     * Formats a date into one consistent format
     * @author Rick Kelder
     * @param {string} The date to be converted, must be parseable by momentJS
     * @return {string} A formatted date
     */
    exports.FormatDate = function(date) {
        return moment(date).format("lll");
    };

    return exports;
});

Now we can stage the custom JavaScript library again. We need to run it to get the right version, and then edit our consuming application:

"requirejs": {
	  "paths": {
	     "moment":"https://cdnjs.cloudflare.com/ajax/libs/moment.js/2.24.0/moment-with-locales.min",
       "gnDevelopment":"../../../../../GN_JavaScript_Libraries/1.0/webApps/js/version_1562328770000/resources/lib/gn/development",
       "gnVisual":"../../../../../GN_JavaScript_Libraries/1.0/webApps/js/version_1562328770000/resources/lib/gn/visual"
    }
  },

Note that we need to include moment in the consuming application as well, for the reasons mentioned earlier.

Let’s define a callFormatDate at application level again, by including gnVisual and adding a JavaScript function in the application JS:

/**
 * Converts a date with moment
 * @param {String} date
 * @return {String}
 */
AppModule.prototype.callFormatDate = function (date) {
  return gnVisual.FormatDate(date);
};

Now add the HTML and bind a text to the following (for example using a standard VBCS Business Object that has the field lastUpdateDate):

<oj-bind-text value="[[ $application.functions.callFormatDate($current.data.lastUpdateDate) ]]"></oj-bind-text>

Run the consuming application and see how moment formats the date the way you have defined.

Now imagine that we have 50 applications and someone says: “Hey, I want to change how all the date formats look.” This would be nearly impossible without our custom JavaScript library solution. With our solution, however, we can change the date format in ONE spot (at the moment we do need to update the version numbers, but that’s it). One can imagine how much more powerful this pattern is with more complex solutions that consist of multiple lines of code (for example quick-searching arrays). It also ensures consistent behaviour across all applications.

Step 8: smart debugging

When we have an action chain and we call the LogMe step, we get an entry in our Chrome console. This costs performance, and it is not something we want in live versions. How do we fix this? Remember the LogMe step from step 6 and the “exception” we have been talking about?

Let’s introduce module variables (we should NEVER change these at runtime, but ONLY as a configuration parameter).
I will use such a variable to set a debugMode:

define([], function() {
    var exports = {};

    /**
     * Boolean to control if development functions should do debugging output
     * @constant
     * @type {boolean}
     */
    exports.debugMode = true;

    /**
     * Logs a line to the client console when global flag debug mode is on
     * @author Rick Kelder
     * @param {object} The object to be logged to the console in default format
     */
    exports.LogMe = function(objectToBeLogged) {
        if (exports.debugMode)
            console.log("ApplicationLog: ", objectToBeLogged);
        return;
    };

    return exports;
});

When we want to use this we must stage the custom library again, get the version and update it in our consuming applications. We do get a nice bonus though! We can switch one flag to enable all custom Chrome logging in ALL our applications at once!

Of course we don’t want to do this manually, so in our pipeline we will do an automatic replace on exports.debugMode = true;
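As a sketch of what that replace could look like (assuming a Node-based build step; this is not our actual pipeline code):

// Minimal sketch: switch off debugMode in development.js during a build.
var fs = require("fs");

var path = "webApps/myJavaScriptApplication/resources/lib/gn/development.js";
var source = fs.readFileSync(path, "utf8");
fs.writeFileSync(path, source.replace(
  "exports.debugMode = true;",
  "exports.debugMode = false;"
));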

Requirements left

  • custom JS libraries are stateless (only input and output parameters, no states, can only access global variables that are NOT changed by code)
    • We can solve this requirement by having all the developers listen to us. The exception is in the extended work.
  • custom JS libraries can be unit tested
    • We have not implemented this yet, but how I envision it is that we will use a 3rd-party library like QUnit (also the test framework for custom JET components). It would be great if we could run the tests with each staging or build, but also manually when running the application (and make a VBCS screen with green ticks for each test!). A first sketch follows after this list.
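Such a test could look something like this (a minimal sketch, assuming QUnit is loaded alongside the library; again, this is not implemented yet):

// Minimal QUnit sketch: a unit test for the FormatDate library function.
define(["gnVisual"], function (gnVisual) {
  QUnit.test("FormatDate returns a non-empty formatted string", function (assert) {
    var formatted = gnVisual.FormatDate("2019-07-04T12:00:00Z");
    assert.ok(typeof formatted === "string" && formatted.length > 0,
      "FormatDate should return a non-empty string");
  });
});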

Caveats

  • Right now we need to change the RequireJS versions EACH time we stage the JavaScript library application. This is not really that bad, as it forces us to check the consuming application each time we change a library. A /latest symbolic link would be a really nice addition from Oracle though.
  • When we use RequireJS includes inside any of the custom libraries themselves (see extended step 7), we need to include them in the consuming application as well; otherwise the pathing goes wrong, as it will try to resolve the standard path, which is the same as the custom library’s own path (please let me know if anyone knows a solution that would make it possible to use the application-flow.json settings of the custom JavaScript library!)

Thanks for reading this post.

Thanks to Coen Elbersen, Jan Kettenis, Rutger Saalmink and Kees Terburg
Thanks to Oracle for answering questions regarding VBCS