Monday, July 21, 2008

Deploying Updates to Production Applications

While at ODTUG in June, I had the pleasure of attending John Scott's ApEx Best Practices session. For me that was a real treat, as Steve Feuerstein's PL/SQL best practices is one of my all-time favorites. One of the things I remember most about the session (and I can only rely on memory, as I can't seem to find the presentation from the conference) was a tip on deploying updates to applications in production while minimizing downtime - down to a few minutes.

The technique was simple enough and I'm a big fan of simple during deployments. The prerequisite was use of Apache over the EPG (Embedded PL/SQL Gateway) although there are some ways around that (more later). The steps involved went something like this (pulling from memory):

  1. Deploy the updated application into production while leaving the current application in place. Yes, this means it will get a new APP_ID, and if the APP_ALIAS is not changed, the APP_ID will be appended to it.
  2. Get the updated application working. Easy enough, right? Get it up while the other application breathes its last breaths.
  3. When the updated application is up and running 100%, use an Apache rewrite rule to redirect traffic from the old application to the new one.
  4. Then, when the time is right, remove the old application and modify the updated application's APP_ALIAS to what it should be.

It is step 3 that requires Apache. However, if your users access your applications via a centralized link (on a portal) and that link uses the APP_ALIAS and not the APP_ID (as all your links do, right?), then users will be redirected once you've completed step 4 anyway. You might want to delay removing the old application, though: instead, just alter its APP_ALIAS (so that you can use it in the updated app) and then set its status to "Unavailable" with a message that includes the correct link. Providing the correct link will help those who saved the link in their browser after it resolved to the APP_ID.
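The rewrite rule for step 3 might look something like the following sketch. The DAD path (/pls/apex) and the APP_IDs (100 for the old application, 200 for the new one) are placeholders; adjust them to your mod_plsql configuration.

```apache
# Redirect f?p=100:... requests to the new application (200).
# Assumes mod_rewrite is enabled and the DAD is mounted at /pls/apex.
RewriteEngine On
RewriteCond %{QUERY_STRING} ^p=100($|:.*)
RewriteRule ^/pls/apex/f$ /pls/apex/f?p=200%1 [R=302,L]
```

A temporary (302) redirect keeps browsers from caching the redirect permanently while the cutover is still in progress.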

Now, why did I just reiterate all of that? Well, for one, I think it's a great technique that may benefit a lot of people. But I remember thinking to myself, and even asking the question, "What will happen to Interactive Reports saved in the application being replaced?" I suspected I'd have issues with that and made a mental note to follow up on it later.

The fact is that if your users have saved Interactive Reports in the old application, then you'll need to add one more step to the migration process. Thanks to Brian, I now know the table holding the Interactive Report data is named WWV_FLOW_WORKSHEET_RPTS (in the FLOWS schema). In this table there are two columns that will need to be updated: WORKSHEET_ID and FLOW_ID. I created the following procedure to help with this process (note that if an interactive report was modified as part of the deployment, or if it was moved to another page, a more manual approach will be required). Also, UPDATE will need to be granted on WWV_FLOW_WORKSHEET_RPTS to the executing schema.

create or replace
PROCEDURE transfer_apex_rpts (
   p_old_app_id IN NUMBER
 , p_new_app_id IN NUMBER
)
IS
   l_new_worksheet_id NUMBER;

   -- Look up the worksheet ID for a given application and page.
   FUNCTION worksheet_id (
      p_flow_id IN NUMBER
    , p_page_id IN NUMBER
   )
      RETURN NUMBER
   IS
      l_worksheet_id NUMBER;
   BEGIN
      SELECT id
      INTO l_worksheet_id
      FROM flows_030100.wwv_flow_worksheets
      WHERE flow_id = worksheet_id.p_flow_id
         AND page_id = worksheet_id.p_page_id;

      RETURN l_worksheet_id;
   END;

BEGIN
   IF p_old_app_id IS NULL
   THEN
      raise_application_error(-20001, 'p_old_app_id must be NOT NULL');
   END IF;

   IF p_new_app_id IS NULL
   THEN
      raise_application_error(-20002, 'p_new_app_id must be NOT NULL');
   END IF;

   -- Point each saved report at the corresponding worksheet in the new app.
   FOR x IN (
      SELECT DISTINCT worksheet_id, page_id
      FROM flows_030100.wwv_flow_worksheet_rpts
      WHERE flow_id = p_old_app_id
   )
   LOOP
      l_new_worksheet_id := worksheet_id(p_new_app_id, x.page_id);

      UPDATE flows_030100.wwv_flow_worksheet_rpts
      SET worksheet_id = l_new_worksheet_id
      WHERE worksheet_id = x.worksheet_id;
   END LOOP;

   -- Finally, move the reports themselves to the new application.
   UPDATE flows_030100.wwv_flow_worksheet_rpts
   SET flow_id = p_new_app_id
   WHERE flow_id = p_old_app_id;

   COMMIT;
END;
/
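Assuming the grant mentioned above is in place, transferring the saved reports is then a short anonymous block. The application IDs shown are placeholders; substitute your actual old and new APP_IDs.

```sql
-- 100 = old application, 200 = new application (hypothetical IDs)
BEGIN
   transfer_apex_rpts(p_old_app_id => 100, p_new_app_id => 200);
END;
/
```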

If nothing else, this procedure can provide some insight into how this can be approached. As always, let me know if you have any questions. This is new to me, so I'm sure the procedure will evolve with time. As it does, I'll update it here.

Saturday, July 12, 2008

tapiGen demo app

I was recently asked if I could put together a demo application to show how using tapiGen makes it easier to build some simple pages in ApEx. I thought about it for a while but ultimately decided it was not a very easy task; in fact, it's currently more difficult! No one can really argue against how simple the dev team at Oracle has made building simple forms. The trade-off, however, is control.

When I first created tapiGen, I was hoping that Oracle had an API through which I could automate the creation of pages and content, much like what happens when you use the wizards to create pages and regions. The idea was to provide an alternate means through which a form could be created that would use tapiGen for processing instead of the Automated Row Fetch and Automatic Row Processing. However, no such API exists, and it's unclear if one ever will. (The most I can do in tapiGen is generate the code, which can then be copied and pasted into ApEx processes; I'll do this in version .3.)

There are, however, some compelling reasons for using tapiGen in ApEx:

  • Automatic Row processes are only useful for simple forms that feed off of one table. If you want to work with multiple tables on the same page, you'll need to write code to perform the fetches and DML. tapiGen will help keep this "custom" code to a minimum while providing other functionality.
  • Using the Automatic Row processes requires up to 4 additional page items for the audit columns: PXX_CREATED_BY, PXX_CREATED_ON, PXX_MODIFIED_BY, PXX_MODIFIED_ON. These page items then need to be properly configured so that 1) they are not shown to or editable by the end user and 2) their values are properly set on submit to update the columns as needed. In JDeveloper this can be automated, so I figured, why not in ApEx? With tapiGen, audit columns are handled for you, so the items are not necessary.
  • Automatic Row Processing (DML) displays only one "success message", such as "Operation Successful", even though it can handle three types of operations. I wanted to display more contextually accurate messages such as "Contact Created", "Contact Modified", and "Contact Deleted". This requires three separate page processes, one for each DML operation, but using tapiGen keeps the amount of code required to a minimum.
  • Errors are not "handled" by the Automatic Row processes, only displayed to the end user. If a row fetch fails, I want to know about it. tapiGen provides a consistent method through which errors are written to a table called ERRORS. The inserted row includes a lot of information, such as the application alias, application id, page id, app user, and the time at which the error occurred, along with other details. This can be very useful when trying to figure out what went wrong. If you want to take it to the next level, a trigger can be placed on the errors table with the necessary code to email the group or person responsible for maintaining the application, including all of the aforementioned information. Proactive maintenance, oh yeah! ;)
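The notification trigger described above might look something like this sketch. The column names on the ERRORS table and the email addresses are assumptions (adjust them to the actual table tapiGen creates), and UTL_MAIL must be installed and configured (smtp_out_server) before anything will actually be sent.

```sql
-- Hedged sketch: email on every new row in the ERRORS table.
-- Column names and addresses are hypothetical.
CREATE OR REPLACE TRIGGER errors_notify_trg
   AFTER INSERT ON errors
   FOR EACH ROW
BEGIN
   UTL_MAIL.SEND(
      sender     => 'apex@example.com'
    , recipients => 'support@example.com'
    , subject    => 'Error in application ' || :NEW.application_id
    , message    => 'Page: '    || :NEW.page_id       || CHR(10) ||
                    'User: '    || :NEW.app_user      || CHR(10) ||
                    'Message: ' || :NEW.error_message
   );
END;
/
```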

The truth is that table APIs are not ApEx specific. Many corporations that have never heard of ApEx use TAPIs to help ensure that PL/SQL code is written in a consistent and efficient manner that makes maintenance much easier.

Steve Feuerstein first got me into TAPIs in his book PL/SQL Best Practices. In fact he offers a TAPI generator as a part of the Quest CodeGen Utility that I'm still working to catch up to in some ways. But I wanted a TAPI generator that made working with ApEx easier, included some additional functionality for working in web based environments, and didn't require a JRE to work (so it could be used in XE). Also, Steve is not currently/actively maintaining the generator, so new DB features such as the function result cache in 11g will not be taken advantage of, at least not until he decides to do it.

tapiGen is open source and I don't plan to stop updating it (of course, I never planned on making it either). The fact is that I use it all the time, so I have to keep making it better. For example, the next version will make DML with collections much easier - something I'm really looking forward to! I would love to have others helping to maintain the code, although I don't know if that'll ever happen. I'd also love any feedback as to how it could be made better. Hopefully by the time version 1 (production/stable) is released, the project will help meet a lot of people's needs.

Having said all this, I feel like I'm copping out... I'll tell you what: when version .3 is released, provided there's some interest, I'll put out an example application that demonstrates the use of tapiGen in ApEx.

Saturday, July 5, 2008

tapiGen Update Released

I just released a new version of tapiGen available here:

There were a few important changes along with some trivial ones. Among the more important are:

  • Added support for function result cache to the RT and RT_FOR_WEB_UPDATE functions. Applications already using these functions in 11g will see a performance benefit, although nothing changes for 10g.
  • Global variables added for audit column names. These variables allow TE packages to be created for tables that do not use the default names: CREATED_BY, CREATED_DATE, MODIFIED_BY, MODIFIED_DATE.
  • The requirement for a table to have a primary key column named "ID" has been removed. The column will be looked up and the TE package created accordingly.
  • The MD5 function created in TE packages now uses DBMS_CRYPTO.HASH instead of DBMS_OBFUSCATION_TOOLKIT.MD5. This allows for hashes of rows that contain over 32767 bytes of data but requires the EXECUTE grant on DBMS_CRYPTO. The function now takes all columns into account and can handle both CLOBs and BLOBs.
  • Added DBMS_OUTPUT calls to provide feedback on package creation.
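For reference, an 11g result-cached fetch function might be declared like the sketch below. The table and function names are assumptions, not tapiGen's actual generated code; RELIES_ON is how 11.1 declared the cache's table dependencies.

```sql
-- Hypothetical sketch of a result-cached row fetch (11g).
-- The cache is invalidated when the CONTACTS table changes.
FUNCTION rt (p_id IN contacts.id%TYPE)
   RETURN contacts%ROWTYPE
   RESULT_CACHE RELIES_ON (contacts);
```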

I've already got a few things in mind for version .3:

  • Overloading the INS, UPD, and DEL functions to accept associative arrays. This alone will be a great time saver, as in 10g and earlier it can be quite tedious to declare the arrays for each column, set their values, and then pass them in. There's also a performance impact. This restriction has been removed in 11g, and in the next version of tapiGen, conditional compilation will be used to determine the best method of performing these FORALLs.
  • Use of the supplied ERR package and the ERRORS table will become optional.
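The conditional-compilation idea might look something like the following sketch. All names here are hypothetical, and the actual version .3 code may take a different approach entirely.

```sql
-- Hypothetical: choose a bulk-bind strategy at compile time.
PROCEDURE ins (p_rows IN contact_tab)
IS
BEGIN
   $IF DBMS_DB_VERSION.VER_LE_10_2 $THEN
      -- 10g and earlier: fall back to row-by-row inserts.
      FOR i IN 1 .. p_rows.COUNT LOOP
         INSERT INTO contacts VALUES p_rows(i);
      END LOOP;
   $ELSE
      -- 11g: one bulk bind for the whole collection.
      FORALL i IN 1 .. p_rows.COUNT
         INSERT INTO contacts VALUES p_rows(i);
   $END
END ins;
```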

If anyone has any questions, please feel free to ask. On another note, I'm beginning another open source project called plPDI. This will be a simple package designed to abstract away the use of the vCalendar/iCalendar and vCard standards. More details on that later!