Tuesday, August 20, 2013

Chunked File loading with APEX Listener + HTML5

  I just found the HTML5 File API the other day, so I had to see what I could do with it and the APEX Listener's RESTful services. There are a bunch of blogs on what can be done, such as this one on HTML5 Rocks.

  The end result is that the new File API lets JavaScript get details of the file and slice it up into parts. Then I made a pretty simple REST end point to receive the chunks and put them back together again.
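Before any slicing happens, the client has to work out how many chunks a file breaks into and where each one starts and ends. A minimal sketch of that arithmetic (the one-based chunk numbering and the 256 KB chunk size are assumptions mirroring the code in this post):

```javascript
// Number of chunks a file of fileSize bytes breaks into.
function chunkCount(fileSize, chunkSize) {
    return Math.ceil(fileSize / chunkSize);
}

// The [start, end) byte range of one chunk; chunkNumber is 1-based.
function chunkRange(chunkNumber, chunkSize, fileSize) {
    var start = chunkSize * (chunkNumber - 1);
    // File.slice() treats the end index as exclusive and clamps it
    // to the file size, so the last chunk is just whatever is left
    var end = Math.min(start + chunkSize, fileSize);
    return { start: start, end: end };
}

// e.g. a 1,000,000 byte file in 256 KB (262144 byte) chunks
console.log(chunkCount(1000000, 262144));      // 4
console.log(chunkRange(4, 262144, 1000000));   // { start: 786432, end: 1000000 }
```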

The actual sending part of the JavaScript is here:
function sendChunk(chunkNumber) {
    var start = chunkSize * (chunkNumber - 1);
    // slice() treats the end index as exclusive, so no -1 here
    // or the last byte of every chunk would be dropped
    var end = start + chunkSize;
    // create the slice of the file
    var fileContent = selectedFile.slice(start, end);
    // grab the length of this slice
    var length = fileContent.size;
    // POST the slice to the REST end point
    $.ajax({
        url: uri,
        type: "POST",
        data: fileContent,
        processData: false,
        contentType: false,
        beforeSend: function (xhr) {
            // pass in the chunk number, offset, name
            // and size as headers
            xhr.setRequestHeader('x-chunknumber', chunkNumber);
            xhr.setRequestHeader('x-filename', selectedFile.name);
            xhr.setRequestHeader('x-offset', start);
            xhr.setRequestHeader('x-chunksize', length);
            xhr.setRequestHeader('x-content-type', selectedFile.type);
        },
        success: function (data, status) {
            bytesUploaded += length;
            // set the percent complete
            var percentComplete = ((bytesUploaded / selectedFile.size) * 100).toFixed(2);
            $("#fileUploadProgress").text(percentComplete + " %");
            // make a link to the REST end point that can deliver the file
            $("#downloadLink").html("New File");
            // if there are more chunks, send the next one
            if (chunkNumber < chunks) {
                sendChunk(chunkNumber + 1);
            }
        },
        error: function (xhr, desc, err) {
            console.log("chunk " + chunkNumber + " failed: " + desc);
        }
    });
}

The next step is to map the HTTP headers to bind variables so the PL/SQL block will be able to use them.

declare
  p_b         blob;
  p_body      blob;
  p_offset    number;
  p_filename  varchar2(4000);
  p_chunksize varchar2(200);
  p_status    varchar2(200);
begin
  -- pull the binds into locals
  p_offset    := :OFFSET + 1;
  p_body      := :body;
  p_filename  := :filename;
  p_chunksize := :chunksize;

  -- if there is already a file with this name, nuke it
  -- since this is chunk number one
  if ( :chunkNumber = 1 ) then
    p_status := 'DELETING';
    delete from chunked_upload
     where filename = p_filename;
  end if;

  -- grab the blob storing the first chunks
  select blob_data
    into p_b
    from chunked_upload
   where filename = p_filename
     for update of blob_data;

  p_status := 'WRITING';

  -- append the new chunk to it
  dbms_lob.append(p_b, p_body);

exception
  -- if no blob was found above, do the first insert
  when no_data_found then
    p_status := 'INSERTING';
    insert into chunked_upload (filename, blob_data, offset, content_type)
    values (p_filename, p_body, p_offset, :contenttype);
  when others then
    -- when something blows up, print the error message to the client
    htp.p(p_status || ' failed: ' || sqlerrm);
end;

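The flow of that block — delete on chunk one, append when a row exists, insert when it doesn't — can be modeled in a few lines of plain JavaScript. This is an illustrative sketch, not the server code; the Map stands in for the CHUNKED_UPLOAD table:

```javascript
// In-memory stand-in for the CHUNKED_UPLOAD table, keyed by filename.
var chunkedUpload = new Map();

function receiveChunk(filename, chunkNumber, body) {
    // chunk one wipes any leftover row for this filename
    if (chunkNumber === 1) {
        chunkedUpload.delete(filename);
    }
    if (chunkedUpload.has(filename)) {
        // row found: append, like dbms_lob.append
        chunkedUpload.set(filename, chunkedUpload.get(filename) + body);
    } else {
        // no_data_found: first insert
        chunkedUpload.set(filename, body);
    }
}

receiveChunk("a.txt", 1, "hello ");
receiveChunk("a.txt", 2, "world");
console.log(chunkedUpload.get("a.txt"));   // hello world
```

Re-sending chunk one simply starts the file over, which is what makes retrying a failed upload from the top safe.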
A very simple HTML page for testing it all out.

Here's a quick video of how it all works.

Here's the complete JavaScript/HTML for this sample.

JS Bin

Friday, August 09, 2013

Data Modeler 4.0 EA1

  SQL Developer Data Modeler 4.0 EA1 is here.  There are lots of new things in there, including support for many of the 12c features such as identity columns, data masking, longer varchar2, and more.  There are still many other new features and tweaks for installs that aren't on bleeding-edge 12c.  See the full list here.

 As always, go download it from OTN, and let us know on the forums what you like or what you find!

Thursday, August 08, 2013

Tailing work in the db as it runs with pipes

  It doesn't seem that long ago that I needed to monitor a process as it ran.  So I wrote a small wrapper over dbms_pipe to look like dbms_output, so it was easy to switch back and forth.  That was in 2004, or so the file's timestamp says.  Time flies!

  DBMS_PIPE is nothing new, and many have blogged its good and bad points.

  Here's the spec of the wrapper I made; the body is here:

CREATE OR REPLACE PACKAGE pipe_output AS
   pv_pipe_on_bln BOOLEAN := FALSE;

   PROCEDURE set_pipeoutput_on;

   PROCEDURE put_line (p_message_txt VARCHAR2);
   PROCEDURE put_line (p_pipe_name VARCHAR2, p_message_txt VARCHAR2);
   PROCEDURE get_line (p_waittime_num NUMBER := 1);
   PROCEDURE get_line (p_pipe_name VARCHAR2, p_waittime_num NUMBER := 1);
   PROCEDURE get_line (p_pipe_name IN VARCHAR2, p_waittime_num IN NUMBER := 1, p_message OUT VARCHAR2);
END pipe_output;

What that allows me to do in a PL/SQL block is something like this:

  for r in ( select object_name from user_objects ) loop
    pipe_output.put_line(r.object_name);
  end loop;
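To make the pattern concrete outside the database, here is a toy JavaScript model of the pipe semantics the wrapper relies on: DBMS_PIPE.RECEIVE_MESSAGE returns 0 on success and 1 on timeout, which is why the receiving side checks for zero. The queue and the shape of the return value here are an illustration, not the real API:

```javascript
// A toy pipe: send_message enqueues, receive_message dequeues or "times out".
function makePipe() {
    var messages = [];
    return {
        send_message: function (msg) {
            messages.push(msg);
        },
        // mirrors DBMS_PIPE.RECEIVE_MESSAGE's return codes:
        // 0 = message received, 1 = timed out (nothing to read)
        receive_message: function () {
            if (messages.length === 0) {
                return { status: 1, message: null };
            }
            return { status: 0, message: messages.shift() };
        }
    };
}

var pipe = makePipe();
pipe.send_message("EMP");
pipe.send_message("DEPT");
console.log(pipe.receive_message());   // { status: 0, message: 'EMP' }
console.log(pipe.receive_message());   // { status: 0, message: 'DEPT' }
console.log(pipe.receive_message());   // { status: 1, message: null }
```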

Now that something is in the pipe, it's time to get it out. For that I wrote a small Java program.  The same thing can be done directly from a sqlplus script or Perl or whatever; it's not complicated.

import java.math.BigDecimal;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class TailDBMSPipe extends Thread {
    private String v_pipe_name;
    private String connName;

    public TailDBMSPipe(String pipeName) {
        v_pipe_name = pipeName;
    }

    public static void main(String[] args) {
        TailDBMSPipe p = new TailDBMSPipe(args[0]);
        p.connName = "jdbc:oracle:thin:" + args[1];
        //p.connName = "jdbc:oracle:thin:my_customers/oracle@ap6002wgs.us.oracle.com:1521:mcdev";
        p.start();
    }

    public void run() {
        System.out.println("Redirecting database pipe:" + v_pipe_name);
        Connection conn = null;

        try {
            conn = DriverManager.getConnection(connName);

            String s = null;
            BigDecimal retVal;
            BigDecimal bigZero = new BigDecimal(0);

            while (true) {
                // wait up to 100 seconds for a message; 0 means one arrived
                CallableStatement cs = conn.prepareCall("{ ? = call DBMS_PIPE.RECEIVE_MESSAGE(?, ?) }");
                cs.registerOutParameter(1, Types.INTEGER);
                cs.setString(2, v_pipe_name);
                cs.setInt(3, 100);
                cs.execute();
                retVal = cs.getBigDecimal(1);

                if (retVal.compareTo(bigZero) == 0) {
                    // a message arrived; unpack it and print it
                    cs = conn.prepareCall("{ call DBMS_PIPE.UNPACK_MESSAGE( ? ) }");
                    cs.registerOutParameter(1, Types.VARCHAR);
                    cs.execute();
                    s = cs.getString(1);

                    if (s != null) {
                        System.out.println(v_pipe_name + ":" + s);
                    }
                }
                cs.close();
            }
        } catch (Exception e) {
            e.printStackTrace();
            try {
                if (conn != null) {
                    conn.close();
                }
            } catch (Exception ex) {
                System.out.println("could not close");
            }
        }
    }
}
The result is that you can watch what comes across the pipe very quickly.  You may notice the timestamps in the output in the demo here:

Thursday, August 01, 2013

SQL Developer meets REST

  SQL Developer is adding the ability to define all of the APEX Listener's RESTful services.  There are many advantages to building this in.  The first, and probably the most important for someone like me who typos and doesn't see it, is the ability to test the query or PL/SQL before publishing it, instead of getting a 500 error and not knowing why.
  This screenshot shows many of the other advantages.  The REST definition screens will have a fully functioning SQL Worksheet to test and tune the source of the REST call.  That means code insight, explain plan, autotrace, the SQL Tuning Advisor, grids of data, and anything else.

  The differences from a normal worksheet are that the tab is named for the handler method and the REST end point, and there are additional tabs for defining in and out parameters.

  The other thing I frequently don't get correct is the URL when testing it out.  I forget whether I added a slash, or what the module was, or I typo something and get the URL wrong.  The new Details tab shows what the URL is going to be.
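That slash bookkeeping is easy to get wrong by hand. A tiny sketch of joining the pieces with exactly one slash between them (the base/module/template split loosely mirrors how the Listener composes resource URLs; the values are made up):

```javascript
// Join URL pieces with exactly one slash between them.
function buildRestUrl() {
    var parts = Array.prototype.slice.call(arguments);
    return parts
        .map(function (p) { return p.replace(/^\/+|\/+$/g, ""); })
        .join("/");
}

console.log(buildRestUrl("http://localhost:8080/apex/", "/demo/", "upload"));
// http://localhost:8080/apex/demo/upload
```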

A quick video of making a new REST call in SQL Developer: