Thursday, December 16, 2010

Modeler Custom Transformations

This is a follow-up to the transformation I mentioned earlier, where I showed how to write a custom transformation in JavaScript for a design.  This is based on Java's pluggable scripting engine support, which means almost anything can be plugged in instead of JavaScript.  See here for details on the available languages.

This is the beginning of what I plan on building up over time: a library of functions that makes writing custom scripts very easy.  This is an example of adding the standard who columns to every table in the model.

The first two functions delete and add columns to a table.

// import actual java classes for use later
importPackage(javax.swing);
// variable to keep a status message for later
var msg = "";

The delete function takes in the table and the name of the column to delete:

function deleteColumn(table, colName){
    columns = table.getElements();
    // iterate columns looking for the one to remove
    for (var i = 0; i < columns.length; i++) {
        if (columns[i].getName().toUpperCase() == colName){
            msg += "Deleted from " + table.getName() + " : " + colName + "\n";
            // remove the matched column from the table
            table.removeColumn(columns[i]);
        }
    }
}
The checkOrCreate function takes in the table and the name of the column to add. This checks for the existence of the column before adding it:

function checkOrCreate(table, colName, typeName, typeSize){
    hasCol = false;
    columns = table.getElements();
    for (var i = 0; i < columns.length; i++) {
        column = columns[i];
        if (column.getName().toUpperCase() == colName){
            hasCol = true;
        }
    }
    // if the column is not present add it
    if (!hasCol) {
        newCol = table.createColumn();
        newCol.setName(colName);
        // lookup the logical datatype based on the name
        type = model.getDesign().getLogicalDatatypeSet().getLogTypeByName(typeName);
        newCol.setLogicalDatatype(type);
        // only set a size when one was passed in
        if (typeSize != null) {
            newCol.setDataTypeSize(typeSize);
        }
        msg += "Added to " + table.getName() + " : " + colName + "\n";
    }
}

// grab all the tables in the model as an array
tables = model.getTableSet().toArray();

for (var t = 0; t < tables.length; t++){
    // remove all these from the table
    deleteColumn(tables[t], "CREATED_BY");
    deleteColumn(tables[t], "CREATED_ON");
    deleteColumn(tables[t], "UPDATED_BY");
    deleteColumn(tables[t], "UPDATED_ON");
    // add them back with the specified datatypes
    checkOrCreate(tables[t], "CREATED_BY", "VARCHAR", 200);
    checkOrCreate(tables[t], "CREATED_ON", "DATE");
    checkOrCreate(tables[t], "UPDATED_BY", "VARCHAR", 200);
    checkOrCreate(tables[t], "UPDATED_ON", "DATE");
}

// notify the user what happened
JOptionPane.showMessageDialog(null, msg);

The end result of running this will be a dialog that shows all the actions performed.

Friday, December 10, 2010

APEX Listener startup/shutdown script

David asked me for a way to keep the Listener up and running when a box is rebooted.  There may very well be better ways to do this, but this script can be used to start, stop, get status, and tail the log file.  It can also be dropped in as an init.d script to have the Listener start when the box starts.  Since we don't have access to init.d in David's case, we are just going to put in a cron job that checks status and starts the Listener if it's not already going.  The start function checks to ensure it's not already running before trying to start it up.  You should be able to use this by just changing the location variables at the top of the file.

The one addition over a normal init.d script is the log command I added (which is very simple).  Just do ./ log and it will tail the log file.

#!/bin/bash
. /etc/rc.d/init.d/functions
NAME="Oracle Application Express Listener"

# location variables - change these to match your install
APEXWAR=/opt/apex/apex.war
LOGFILE=/var/log/apexlistener.log
PIDFILE=/var/run/
JAVA=/usr/bin/java

OPTIONS="-Xmx1024m -Xms256m  -jar $APEXWAR"

start() {
        echo -n "Starting $NAME: "
        if [ -f $PIDFILE ]; then
                PID=`cat $PIDFILE`
                echo APEX Listener already running: $PID
                exit 2;
        fi
        # send both stdout and stderr to the log
        nohup $JAVA $OPTIONS > $LOGFILE 2>&1 &
        RETVAL=$!
        echo Started PID: $RETVAL
        echo $RETVAL > $PIDFILE
        return 0
}

status() {
        echo -n "Status $NAME: "
        if [ -f $PIDFILE ]; then
                PID=`cat $PIDFILE`
                echo APEX Listener running: $PID
                ps -ef | grep $PID
        else
                echo APEX Listener not running
        fi
}

stop() {
        if [ -f $PIDFILE ]; then
                PID=`cat $PIDFILE`
                echo -n "Shutting down $NAME "
                kill $PID
                rm -f $PIDFILE
        else
                echo APEX Listener not running
        fi
        return 0
}

log() {
        tail -f $LOGFILE
}

case "$1" in
        start)   start ;;
        stop)    stop ;;
        status)  status ;;
        restart) stop
                 start ;;
        log)     log ;;
        *)       echo "Usage: $0 {start|stop|status|restart|log}"
                 exit 1 ;;
esac
exit $?
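For the cron-based setup mentioned above, an entry along these lines would do it. The script path and name here are assumptions; point it at wherever you saved the script. Since start is a no-op when the PID file shows the Listener already running, calling it repeatedly is safe.

```shell
# check every 5 minutes; start does nothing if the Listener is already up
*/5 * * * * /path/to/ start
```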

Thursday, November 11, 2010

Adding mail processing to your database app

   I'm not sure how many people know that part of the APEX team, Sharon Kennedy, runs ProMedmail, which is probably one of the scariest and most active internet APEX applications.  It is "The global electronic reporting system for outbreaks of emerging infectious diseases & toxins, open to all sources."  So by a scary APEX app I mean you can get your latest updates on Anthrax in Bangladesh, Dengue Fever, and about everything else going on in the world.

  The reason the "mail" part is in the name is that, as you may have guessed, it's a mailing list which anyone can subscribe to here.   It ends up on the website via some custom code that reads an inbox and inserts the mail into a table in the database, which is then exposed by APEX.  I've taken that custom code and made it quite generic.  I'm going to make this available in a project named email2sql.  This is working and it's a really easy way to add email processing to any application.

To try this out there are just a couple of steps.
1) Checkout the project
2) Edit the settings.xml for email and sql
2.5) If you use the tables already in the settings.xml, create the tables with mail.sql
3) Run "ant run"  ( if you need ant it's at )

In the settings.xml are 2 entries, sql and attachment.sql; these control what to do, and all the available binds are shown in the example.  This can be PL/SQL instead of SQL also.

<entry key="sql">
 insert into mail(id,to_address,subject,msg_body,rcvd_date,sent_date,msgid,from_address) ...
</entry>
<entry key="attachment.sql">
 insert into mail_attachments(id,content,filename,content_type,content_size) ...
</entry>

Also, for Promedmail we use Hudson to schedule this to run every 5 minutes.  If you've not looked into Hudson, it's very simple to get started with too.  I did a talk on this back in February at RMOUG.  But more on that later.

Please let me know of any issues or suggestions.

Wednesday, October 27, 2010

Fine Tuning SQL Developer

Someone recently told me they didn't know you could turn off extensions you don't use in SQL Developer, so I thought a quick blog may be in order.

Since SQL Developer and JDeveloper are based on the concept of extensions, they can do things like check for updates or install new extensions very easily.  The other side of that is that you can disable the extensions you don't use.  A couple examples of things we include by default that you may not use are the modeler, migrations, and TimesTen support.  All you have to do is go to Tools->Preferences in the main menu and then uncheck the ones you don't want.  The result is some menus don't get installed, some code is no longer running at startup, and you have a slimmer IDE since you turned off the parts you don't use anyway.

Wednesday, October 20, 2010

Apex Listener EA and more

In case anyone missed it, the new APEX Listener 1.1 EA is out.  The big new feature in there is the Resource Templates.  If you are interested in this feature and have not found it yet, you should really start following Colm's blog.  You can learn about it from Colm: here and my post last week: here.  It will let you get data out via REST with less development than ever before.  The Developer's Guide goes into all the details of this new set of features.

Another thing in this EA is that the APEX Export and Splitter have been combined into the apex.jar which is included.  Once the apex.war is deployed, the apex.jar is expanded out; however, if you want to use just the jar, issue this command to pull it out.  I'll clean this step up in upcoming releases, but for now here's how to test it out.

jar xvf  apex.war WEB-INF/lib/apex.jar

Then you just issue the commands as normal.  The new part is that they are joined and in one command line you can export and split.

$> java -cp apex.jar:/path/to/ojdbc6.jar  oracle.dbtools.apex.utilities.APEXExport

Usage: APEXExport -db -user -password -application -workspace -instance -skipExportDate -expSavedReports -debug
 -db               : Database connect url in JDBC format
 -user             : Database username
 -password         : Database password
 -useAliasFileName : Use alias.sql instead of f12345.sql
 -application      : ID or name of the application to be exported
 -workspace        : Workspace ID or name for which all applications are to be exported
 -outDir           : Directory to export to
 -instance         : Export all applications
 -skipExportDate   : Exclude export date from application export files
 -expSavedReports  : Export all user saved interactive reports
 -split            : Split the export files
 -flat             : Split into a flat file structure
 -update           : Create an update file
 -nochecksum       : Don't check for changes

Application Example: 
   APEXExport -db -user scott -password tiger -application 31500 
Workspace  Example: 
   APEXExport -db -user scott -password tiger -workspace 9999 
Instance Example:
   APEXExport -db -user flows_020200 -password apex -instance 

The other things new here came from a request from John Scott.  There's a flag to name the files based on the alias instead of just the ID, and the application name or workspace name can be passed instead of the ID.

The splitter has a few options:

-split : The main switch to turn on the splitting.

-flat : Instead of a directory tree, it will use a single directory for all the split-up files.

-update : This will create an update.sql based on what changed since the last split was done.  It uses a checksum on the new and old files to determine if there was a change in that section.

-nochecksum : Normally, when you split f100.sql into all the split files, the checksum means that if page1.sql didn't change, its timestamp is not updated.  If you turn checksumming off, the timestamp is always updated.  The checksumming is nice for things like checking into version control: only the page/item/tab/... scripts that changed will change, so you can see quickly what has and has not changed.
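The timestamp-preserving checksum behavior can be sketched like this. This is just the idea, not the tool's actual code, and the file names are made up:

```shell
# Sketch of the -update/-nochecksum idea: only rewrite a split file
# (and so bump its timestamp) when its content actually changed
workdir="$(mktemp -d)"
printf 'page 1 source\n' > "$workdir/old_page1.sql"
printf 'page 1 source\n' > "$workdir/new_page1.sql"   # same content this time
if cmp -s "$workdir/old_page1.sql" "$workdir/new_page1.sql"; then
  echo "page1.sql unchanged - timestamp kept"
else
  cp "$workdir/new_page1.sql" "$workdir/old_page1.sql"
  echo "page1.sql changed - file rewritten"
fi
```

Version control then only sees the files whose content really changed.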

Tuesday, October 19, 2010

Four Stages of Learning

I went to a great set of classes this weekend and wanted to share one of the things I learned.  The classes had nothing at all to do with any form of technology; they were to get a hockey coaching license.  It was a very fun class since we got to meet Jamie Baker and Jay Woodcroft.  One slide was on the four stages of competence.  I don't think I've seen this before, but it's on Wikipedia and is something that has been around for quite some time.  It breaks down the process into a combination of 2 factors: competence and consciousness.

Taking those 2 factors there are 4 stages of learning.  Here's the process from the highest level to the lowest.

1) Competence and Unconscious
 - This is where you have become competent in the area you are learning, to the level that you unconsciously react to accomplish something.
 - How many times do you do things on autopilot?  Your training has progressed to the highest level, and due to the number of repetitions, it's just an automatic response now.

2) Competence and Conscious
 - You have the competence; however, you are conscious of what you are doing.
 - How many times do you have to stop and think about what to do?  Then after a little thought, it's easy using the skills you already have.

3) Incompetence and Conscious
 - You don't have the skills yet but are aware of what's needed to accomplish the task.
 - How many times do you know where you need to be, yet have to break out a manual/book/something to find the skills needed to get there?

4) Incompetence and Unconscious
 - You are just starting out so there's not a high skill level nor the awareness to know what needs to be done.
 - How often have you started a new project and not known where to go or how to start it?

Thursday, October 14, 2010

SQL Developer 3.0 and Encrypted PDF

Another new thing in the EA is unloading a grid of data to a PDF file.  A plain PDF is one thing, but what this will show is how to encrypt it.  While in the database, the data is normally quite secure: you have to have proper permissions and knowledge of how to get to the data.  However, when it's loaded into Excel or PDF or ... the security is far harder to enforce.  Here's one way to address it with encrypted PDFs.

First go into Tools -> Preferences and set up a default title/author/subject.  These are changeable later.

Now simply go down to the security section and check a couple boxes and enter a password.  I'm just showing the simplest case but you can see from the checkboxes that you can give more granular permissions also.

The easiest part is next: write your SQL statement to get the data you need.  Once you have the data in a grid, it's as easy as right click -> Unload.

Once you have the dialog to unload, choose PDF in the drop list of formats.  This will bring up a section to change the defaults for report title, subject, and keywords.

Lastly, open the PDF and you'll get prompted for a password.  This screenshot is using Apple's Preview, which is just the default viewer on a Mac; any PDF viewer will function the same.

Wednesday, October 13, 2010

SQL Developer 3.0 EA

Monday we released the SQL Developer Datamodeler 3.0 EA.  Today the SQL Developer 3.0 EA is out.  There are lots of new things in this version.  Here's a quick index to some of the larger features that have been added.  There are also too many smaller things to list, which include things like a Table API generator.  Some of the features have online demos already; more are on the way.  I will continue to blog demos of the others.

  1. DBMS Scheduler -- Online Demo here
  2. DBA Functionality  -- Online Demo here
  3. Files and Version Control
  4. Migrations
  5. PDF
  6. PL/SQL Support
  7. Query Builder -- Online Demo here
  8. Schema Browser
  9. SQL Plus Commands
  10. Spatial
  11. Tuning
  12. Unit Testing
  13. Unloading and Uploading  -- Data Export / Import was renamed to more accurately be load/unload

Monday, October 11, 2010

SQLDeveloper Datamodeler 3.0 EA

In case anyone missed it this morning, there's an Early Adopter build of the Modeler released today.  You can go and download it from the normal place on

There have been some great new features added to this release, and Sue has some webcasts which range from an overview to more detailed webcasts on version control and design rules.  These webcasts are all under 15 minutes and are a great way to see what's new in the product.

Also, here is the quick list of new features that have been added.

Browser Navigator
The browser navigator supports the ability to have multiple open designs.
Existing physical models automatically appear in the browser.
Custom Design Rules
Create user-defined Design Rules.
Group collections of design rules in Design Rule sets.
Build libraries of Design Rules.
Create user-defined transformations.
Settings are available at the diagram level
- Show labels (for relationships)
- Add diagram legend
- Straighten all lines
- Show grid
- Resize objects to visible size. (This is useful if you change the View Details settings)
Import
Importing CA Erwin Data Modeler files has been extended to include the import of Erwin 7.x files.
Import from Erwin supports the import of Functions, Packages and Stored Procedures to the Physical Model.
Data Dictionary Import supports the import of Functions and Packages.
Importing for Oracle Designer: You can select the checked in branch.
Incremental import of used domains in Designer
Physical Model
Support for the import and DDL generation of packages and functions.
Changed "snapshot" support to "materialized view" support.
Full support for Oracle 11g partitioning (table and materialized view).
Reporting
Reporting is provided through either:
- Reporting Repository: Users export the design to the reporting repository and use SQL Developer to run reports.
- Local Reports: Run reports from the tool itself. Reports are Word XML documents and as such can be opened in MS Word. These reports include entities, tables, domains, and glossary.
Reporting Repository support has been extended to include data type models.
Subversion Integration
Integrated version control using Subversion for complete design.
Allows you to compare and merge objects.
Designs added to versioned folders are placed under version control.
Users can see pending incoming and outgoing changes.
Data Modeler recognizes versioned designs.
Changes made to a versioned design and saved are revealed in the Pending Changes dialog.
Collaborative access and support is provided through tight integration with Subversion.

Saturday, October 09, 2010

3.0 Modeler Sneak Peek - Transformations

The newly free modeler has some great new features coming as well.  This is an example of the new scriptable transformations.  This is a fairly simple example, but I think it shows off how powerful this form of transforming the models can be.

First, we start by importing the HR schema to have something to work with.  

To invoke a transformation or create a new one, simply choose it under Design Rules.

I'm going to use the first one in the list, which changes all table names to lowercase.  The part of this I like is that it's using Java's scripting engine support, which allows you to plug in any engine you like programming in.  The default is JavaScript, which is what this example uses.

The end result is as expected, the table names are lower case now.

3.0 Sneak Peek #2 - Schema Browser

I'm going to try to do some short blog entries showcasing some of the upcoming features of 3.0.  We showed lots of people at Open World, but for those of you that couldn't be there, stay tuned and I'll put a few things out.

Another new thing being added is an alternative view of the objects in the schema. Simply right click on the connections and choose Schema Browser.

Then you are presented with a drop list of the schemas and object type.

Then, as with the tree-based navigator, there is filtering.  This view has 2 types of filtering.  The first is when you click the down arrow as shown in the screenshot.  This takes you to the normal filter dialog and filters by adding a predicate to the SQL that retrieves the list of objects.  The second is the binoculars at the bottom of the list.  This does a very fast/simple client-side filter of the objects listed, as seen below with "jo" in the search.
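The binoculars filter is essentially a case-insensitive substring match over the names already listed. A rough sketch of the same idea, using made-up HR-style object names:

```shell
# "jo" narrows a list of object names the same way the client filter does
printf '%s\n' COUNTRIES DEPARTMENTS EMPLOYEES JOBS JOB_HISTORY LOCATIONS \
  | grep -i 'jo'
```

This prints JOBS and JOB_HISTORY, the two names containing "jo".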

Friday, October 08, 2010

REST/JSON access to your data in 2 clicks

There never seems to be enough time at OOW to show all the new things coming.  Some new things in the Listener are exactly that case this year.  A few people found me and asked some questions, but we didn't have a good setup for demoing it there.  Here's one of the new things coming....

Here's how to get REST / JSON access to your data in 2 mouse clicks.

1. Click Add Resource Template

2. Fill in the fields. Second click: Save.
   You can tell that the {id} in the template is the bind that is passed into the query, where it's referenced with the same bind syntax.

Done.  Now here's the JSON output from all that hard work :)

Obviously this is a very simple example; more samples and documentation on the possibilities are coming.
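To make the bind idea concrete, here's a tiny sketch of how a templated URI resolves once a value is supplied for {id}. The host, port, and the employees/{id} template are my assumptions for illustration, not from the post:

```shell
# substitute a concrete value for the {id} bind in a resource template
template='employees/{id}'
id=7839
uri="http://localhost:8080/apex/$(printf '%s' "$template" | sed "s/{id}/$id/")"
echo "$uri"
```

Requesting the resulting URL is what runs the template's query with :id bound to 7839.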

Friday, September 24, 2010

Oracle Dev Days vm tip

We passed out a few thousand of these DVDs at OpenWorld this week.  I thought I'd pass on a little tip I use to more easily connect from the host to the database inside the guest.

I set up my host's port 1521 to forward into the VM to simplify the connect.  Here's the command to issue on the host, in my case my Mac.

Kris-Rices-MacBook-Pro:~ klrice$ VBoxManage modifyvm "Oracle Developer Days"  --natpf1 "tns,tcp,,1521,,1521"

Now anything going to 1521 on the host will be forwarded into the guest.  Here's sqlplus working on the host machine.

Kris-Rices-MacBook-Pro:~ klrice$ sqlplus system/oracle
SQL*Plus: Release - Production on Fri Sep 24 11:29:03 2010
Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select 1 from dual;

Thursday, August 19, 2010

Getting Running with APEX Listener 1.0.1

There's a new feature in the latest Listener, posted a few minutes ago, that makes it easier than ever to get up and running, thanks to Sun.  There is now a built-in web server courtesy of GlassFish.  That means for desktop use or testing there's no need to download any web server.  However, the same war file is still deployable into your web server of choice as before.

Here are the 5 steps to get a fully functional web server running against APEX.

1) Download Here :
2) unzip
3) type: java -jar apex.war
4) Then just follow command line prompts.
      - You'll be asked for the images location, and the admin and manager users for the listener
5) When the browser pops open fill in the database connect information


Here's a couple of the command line options.

-Dapex.home=/path/to/apex : Path to the folder used to store the web container runtime; defaults to ${user.home}/.apex/nnnn, where nnnn is the HTTP listen port
-Dapex.port=nnnn : HTTP listen port; defaults to 8080
-Dapex.images=/images/location : Path to the folder containing static images

In the doc, see "Using the Embedded Web Container" for more details.

For the curious, what we did was embed Grizzly from the GlassFish project.  This is a project designed for exactly this use: embedding a web server and servlet container for a specialized purpose.

The other big addition is the java docs for the extension points.  More on that coming.....

Friday, July 16, 2010

3.0 Sneak Peek #1

 Ever tinker with SQL to get the best plan possible, but have to track what changed by flipping screens, or find you're not sure which session stats change due to the new plan?  Here's the feature for you.  It highlights what changed in the tree as well as shows all the session stats that change from one plan to the next.  As with all things, this is work-in-progress and subject to change.

This is an example, on a partitioned table I use for testing, of the difference between using date > sysdate-10 and between sysdate-10 and sysdate.  The differences can be seen very quickly.

APEX Listener undocumented feature #1

Seems I always forget to document everything that I code up.  If you have the APEX Listener, try this out.

The listener will convert

This hopefully is a baby step to nicer urls to hand out to end users.


Friday, April 30, 2010

Kicking a few tires

In case you didn't know, we've been doing some FREE hands-on training in a few places.  The next one is at Disneyland.  If you're local, this is a great way to meet some folks from product development.  At the event are developers and product managers from Times Ten, XDM, Java, SQL Developer, SQL Developer Data Modeler, Application Express, JDeveloper, .NET, and I'm sure I'm missing someone.  The training is based on the Oracle By Examples, which have a new home here.  You can try any of these OBEs online anytime.  To make things even easier, we've made the first VirtualBox appliance with everything pre-setup.  Just download and run.

If you want to try out any of these technologies, it's easier than ever.  Follow the 5 steps listed on the appliance page and you'll have a complete setup for running the Application Express and Database Development tracks.  This includes:
  • Oracle Enterprise Linux 5
  • Oracle Database 11g Release 2 Enterprise Edition
  • Oracle TimesTen In-Memory Database Cache
  • Oracle XML DB
  • Oracle SQL Developer
  • Oracle SQL Developer Data Modeler
  • Oracle Application Express
  • Hands-On-Labs (accessed via the Toolbar Menu in Firefox)
 The appliance is also running the latest Application Express Listener to try out.

Tuesday, April 20, 2010

Yet Another APEX Listener EA - Better looking Admin

Another Listener EA was just posted.  There are some fixes as well as some changes.  The biggest changes are in the administration.

The biggest new thing is that the listener has 3 configuration/status URLs now.
1)  http://localhost/apex/listenerConfigure

    This is used for first-time setup only.  Once the listener is configured, this URL is no longer accessible.  Also there's a drop box to choose other connection choices like TNS and Advanced.  Then on the Miscellaneous tab you can point to where your tns file is located.
2)  http://localhost/apex/listenerStatus

    This is used to view the status of the listener.  This includes how many connections are in the pool, how the cache is doing, what the response times are, and also what errors there are in the execution of the PL/SQL.

3)  http://localhost/apex/listenerAdmin

    This is the biggest change from what was there before.  With the exception of the connection info, everything else is changeable at runtime.  So if you want to try out some of the settings like security, cache, pre, port, or the JDBC pool itself, you can change it without a restart.

Now to set these up, there's security that needs to be enabled in the web server you have chosen.  We're working out a howto for all the web servers we'll support.  For now, here are the Tomcat instructions:

1) Edit tomcat-users.xml
2) Add these ( with more secure passwords! )
  <role rolename="Manager"/>
  <role rolename="Admin"/>
  <user username="apexStatus" password="status" roles="Manager"/>
  <user username="apexAdmin" password="admin" roles="Admin"/>

SQL Developer Webcast tomorrow

There is a public webcast tomorrow with development showing the new things in SQL Developer 2.1.  If you are interested and can't make a conference, this is a great way to catch up on what's new.  Normally Sue would be doing these webcasts; however, she was attending the Norway OUG last week and is now trying to get home.  This means I'll be doing the one this time.  Here's an outline of what I'll be showing.

  • Introducing the new webcast series
  • Oracle SQL Developer product update
  • Product Overview Demonstration, focusing on SQL Developer 2.1 New Features.
  • Unit Testing
  • New Worksheet
  • Version Control
  • Navigation Filtering
  • and a few other things
  • Q&A
Date and Time:
21 April 2010
09:00 PST | 12:00 EST | 17:00 UTC | 18:00 CET

Dial-in and web conference details are here:

Monday, March 22, 2010

SQL Developer and Java memory use

First, this feature in Java is not yet production, so if something goes wrong after setting this, back out the changes.

SQL Developer is a Java application, in case you didn't know :)  So we do not manage the memory directly, nor do we have the ability to release it back to the operating system.  That is taken care of by Java, which does the garbage collection for us.  There are many things you can read about tuning the garbage collector (GC) to get it to collect more efficiently.  There is a new GC being built, and it's in 1.6u14+.  I've been testing it, and unlike the older GCs it releases memory back to the operating system.  I did a select and fetched a few thousand rows, which were brought into memory.  Then I closed the worksheet tab, and the Windows task manager showed the memory was released back; SQL Developer dropped in memory usage.  This could help you, and I'm told this will be production in a future update of the 1.6 JDK.  In the meantime, here's how to enable it.

edit the sqldeveloper.conf file and add these lines:

AddVMOption -XX:+UnlockExperimentalVMOptions
AddVMOption -XX:+UseG1GC
AddVMOption -XX:+G1ParallelRSetUpdatingEnabled
AddVMOption -XX:+G1ParallelRSetScanningEnabled

If you care to read more about the new GC, check this link out:

Lastly, remember non-production.

Thursday, February 25, 2010

Another APEX Listener EA - More knobs to turn

There's a new Listener EA released today; go grab it and try it out.  There are 2 real changes.  First, the MIME_TYPE on file uploads is fixed.

The second is a result of sitting down at RMOUG last week and talking over the Excel parsing with Scott.  That conversation resulted in a few new knobs to change some configuration options.

1) I found an API which takes the item_id and gives back the item_name, so the collection will default to the ITEM_NAME of the file upload item.

2) Files ending with .xls OR .xlsx will be processed.

3) Several new configuration choices:
  a) First is a new config option to have 1 sheet of the workbook per collection.  This is controlled by
<entry key="apex.excel2collection.onecollection">true|false</entry>

   b) Then, you can essentially hard code a collection name to be used for all excel uploads.
<entry key="">MY_COLLECTION_NAME</entry>

   c)  Lastly, an option to use the sheet name vs. a sequential WORKSHEET:1,2,3....
<entry key="apex.excel2collection.useSheetName">true|false</entry>

So, here's a recap of excel parsing with the APEX Listener

Step 1: Turn it on
    <entry key="apex.excel2collection">true</entry>
Step 2:
    The request of the upload must be : xls2collection

This will get you ONE collection which contains all sheets in the workbook, and the collection will be named after the APEX item.

To have ONE collection per sheet:

     <entry key="apex.excel2collection.onecollection">false</entry>

To have a sequential sheet name vs. the sheet name from excel.
    <entry key="apex.excel2collection.useSheetName">false</entry>

To specify the name of the collection for ALL collections created via Excel upload

    <entry key="">MY_COLLECTION_NAME</entry>

The results of all of these uploads are a collection formed as follows.
The default of just turning it on will look like this:

CollectionName - Item Name
C001                 - Sheet Name
C002..N            - Data from the worksheet

Thursday, February 11, 2010

Yet another APEX Listener EA

We just pushed the next EA for the Application Express Listener.  There are a few base bug fixes in this, like the file upload issue people posted about on the forums.  The other significant bug fixed is when there are multiple listeners in one web server.  The listener will now check the following locations, in this order, for a config file.

1)  $HOME/[MOUNT POINT]/apex-config.xml
2)  ${config.dir}/[MOUNT POINT]/apex-config.xml ( from web.xml )
3)  ${}/[MOUNT POINT]/apex-config.xml  (default for new installs )
4)  $HOME/apex-config.xml
5)  ${config.dir}/apex-config.xml ( from web.xml )
6)  ${}/APEX/apex-config.xml  (default for new installs )
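The resolution behaves like a first-match-wins search down that list. A sketch of the idea, with stand-in directories in place of the six locations above:

```shell
# walk the candidate locations in order; the first apex-config.xml found wins
base="$(mktemp -d)"
mkdir -p "$base/loc1" "$base/loc2" "$base/loc3"
echo 'from loc2' > "$base/loc2/apex-config.xml"
echo 'from loc3' > "$base/loc3/apex-config.xml"
found=""
for d in "$base/loc1" "$base/loc2" "$base/loc3"; do
  if [ -f "$d/apex-config.xml" ]; then
    found="$d/apex-config.xml"
    break
  fi
done
cat "$found"
```

Even though a config exists in loc3, the one in loc2 is used because it appears earlier in the search order.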

Now for some new things.

1)    A new /listenerAdmin which gives realtime stats.  It's a long way from the final look of the page, but it gives stats.  This is protected by a role named "listenerAdmin".  Users and roles are set up in the web server.  To set up this role in Tomcat, edit conf/tomcat-users.xml and add something like this:

<role rolename="listenerAdmin"/>
<user username="klrice" password="klrice" roles="listenerAdmin"/>

Here's an example from the site:

Current Time: Thu Feb 11 20:08:40 EST 2010
Server uptime: 50 Days 07:16:23.966
Total accesses: 1449806.0
Total Traffic: 21.85G
Total File Downloads: 86029
Total Errors: 3678
Total Processing Time: 10 Days 21:07:36.838

2) There's now a mechanism to log the full db requests.  This is a step towards having the requests replayable in the Unit Tester.
To begin with, here are the 2 lines needed in the apex-config.xml:

 <entry key="apex.log.enable">false</entry>
 <entry key="apex.log.procedure">log_proc</entry>

A sample of the required signature is in the WEB-XML/sample_capture.sql

3) The listener can now take a *.xls file that is uploaded and convert it into a collection.  Here are the steps to try this out.
a)  This needs to be turned on in the apex-config.xml
<entry key="apex.excel2collection">true</entry>

b) The file upload page must have a request of xls2collection.

You should get a collection named after the item, and C001 is the name of the sheet in the xls file.  Here's an example.

Tuesday, January 26, 2010

Hockey without Pads?

Everything seems fine when there are no problems.

My son at an exhibition game.
Then something goes wrong.

Or why would anyone code without Source Control?

Why would someone do either?  We've all done the second.  I'm going to guess not all of us have done the first, but it physically hurts more.  We know better than to code without backups or without checking things into some system, but I get a lot of people asking how to get started.  Well,...

I have a talk at RMOUG in a couple weeks.  This talk is going to be focused on how to set up a source control system and then add continuous integration.  We will use SQL Developer to check in/out and patch the code.  We're going to start with just a bunch of files in a filesystem.  We are going to end up with source code control and integrated continuous integration running to build a stage system automatically based on when code is checked in, with emails when builds fail!

We'll be using Oracle 11G Release 2, Oracle SQL Developer, Subversion, Hudson and Perl for a small script.

If you're not coming to the show but are interested, I'll be writing this up in more detail here after the show.  If there's something that you think should be addressed, let me know.

Friday, January 22, 2010

OTN Developer Days/ Hands on Labs

   Last week we hosted a free event for developers in New York.  The main purpose of this event was to help introduce people to things they may not use day to day or know of.  The full agenda is here.  The best part of this for people is that development staff is on hand to answer questions.  This is also the first time I ever gave a keynote; the slides can be seen here.  We will be bringing this to other cities and countries.

  Even if you can't attend an event, the entire development community will benefit because we're working out how to host virtual machines on OTN that are set up for the labs in the event.  The labs will be contained in the VM.  This means doing a lab will be something like 1) download 2) start VM 3) start lab.  Nice and simple.  The VM for this event had:
  • Oracle Database 11G Release 2
  • Oracle Times Ten 11G Release 2
  • Oracle SQL Developer 2.1
  • Oracle Application Express 3.2.1
  • Oracle SQL Developer Datamodeler 2.0

The nicest thing about this VM approach is zero installs to get tinkering instantly.  I'm in the process of updating a few things in the VM and when that's completed we should be getting them onto OTN.