Tuesday, December 15, 2015

Carbonated SQLCL - aka SODA support

     Oracle Database 12.1.0.2 introduced JSON support.  This is already supported in REST Data Services and now in the latest SQLCL.  This means ORDS can serve up access to client applications, and developers have a command line way to interact and test features/queries/... in a nutshell, a shell for JSON document storage. To use this feature the database will need to have patch #20885778 applied.

KLRICE@orcl >soda

SODA
------

SODA allows schemaless application development using the JSON data model.
  SODA create  
       Create a new collection 

  SODA list 
       List all the collections 

  SODA get <collection_name> [-all | -f | -k | -klist] [{<key> | <k1> <k2> ... | <qbe>}]
       List documents in the collection 
       Optional arguments: 
      -all    list the keys of all docs in the collection 
      -k      list docs matching the specific <key> 
      -klist  list docs matching the list of keys 
      -f      list docs matching the <qbe> 

  SODA insert <collection_name> <json_str | filename> 
       Insert a new document within a collection 

  SODA drop <collection_name> 
       Delete existing collection 

  SODA count <collection_name> [<qbe>] 
       Count # of docs inside collection. 
       Optional <qbe> returns # of matching docs

  SODA replace <collection_name> <oldkey> <new_doc_str> 
       Replace one doc for another 

  SODA remove <collection_name> [-k | -klist |-f] {<key> | <k1> <k2> ... | <qbe>} 
       Remove doc(s) from collection 
       Optional arguments: 
      -k     remove doc in collection matching the specific <key> 
      -klist remove doc in collection matching the list <key1> <key2> ... > 
      -f     remove doc in collection matching <qbe> 

KLRICE@orcl >

There's a lot in there for one command, so give them a try.  SODA is underpinned by tables with JSON data in them.  It's very easy to get going testing out the SODA features.  Here's a simple collection create, insert, and query.


KLRICE@orcl >soda create ODTUG
Successfully created collection: ODTUG

KLRICE@orcl >soda insert ODTUG {"content":"awesome"}
Json String inserted successfully.

KLRICE@orcl >soda insert ODTUG {"location":"chicago"}
Json String inserted successfully.

KLRICE@orcl >soda insert ODTUG {"best_of_week":"DB Symposium"}
Json String inserted successfully.

KLRICE@orcl >soda get ODTUG -f {"content":{"$startsWith":"a"}}

Key:      7A01B31EBD7C484C88F431B8871DD165
Content:  {"content":"awesome"}
-----------------------------------------
 1 row selected. 
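That -f argument takes a query-by-example (QBE) document. The matching itself happens in the database, but a tiny javascript sketch shows what a $startsWith filter means (illustrative only, not how SODA evaluates it):

```javascript
// Illustrative only: the real QBE is evaluated by SODA in the database;
// this sketch just shows what a $startsWith filter means.
function matchesQbe(doc, qbe) {
  return Object.keys(qbe).every(function (field) {
    var cond = qbe[field];
    if (cond !== null && typeof cond === 'object' && '$startsWith' in cond) {
      return typeof doc[field] === 'string' &&
             doc[field].indexOf(cond.$startsWith) === 0;
    }
    return doc[field] === cond; // plain values mean exact match
  });
}

var docs = [
  { content: 'awesome' },
  { location: 'chicago' },
  { best_of_week: 'DB Symposium' }
];
var hits = docs.filter(function (d) {
  return matchesQbe(d, { content: { $startsWith: 'a' } });
});
// hits is [ { content: 'awesome' } ], matching the get -f output above
```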



JavaScript and SODA

Here's a quick example of using the SODA libraries from javascript.  Also in here is a new scripting feature: the javascript can be inlined into a sql script.  Notice that it's soda.sql, a .sql file that has javascript in it; let that sink in for a minute with the possibilities.

KLRICE@orcl >@soda.sql

Caffeinated SODA 
Dropping the old
Created the old
Inserted Document
Searching...
CBFE3D919E814094954BB217A2F6916E
{ "name" : "Alex", "friends" : "50" }


KLRICE@orcl >!cat soda.sql

script
ctx.write('Caffeinated SODA \n');


var OracleRDBMSClient = Java.type("oracle.soda.rdbms.OracleRDBMSClient");
var OracleDataSource = Java.type('oracle.jdbc.pool.OracleDataSource');
var cl = new OracleRDBMSClient();
var db = cl.getDatabase(conn);

ctx.write("Dropping the old\n")
var foo = db.admin().createCollection("foo");
 foo.admin().drop();
ctx.write("Created the old\n")
    foo = db.admin().createCollection("foo"); 


var doc = db.createDocumentFromString("{ \"name\" : \"Alex\", \"friends\" : \"50\" }");
ctx.write("Inserted Document\n")
 foo.insertAndGet(doc);

conn.commit();

var f = db.createDocumentFromString(JSON.stringify( {"name" : { "$startsWith" : "A" }}));
ctx.write("Searching...\n")
var c = foo.find().filter(f).getCursor();


while (c.hasNext()) {
 cDoc = c.next();
 ctx.write(cDoc.getKey() + "\n");
 ctx.write(cDoc.getContentAsString());
 ctx.write("\n\n"); 
}

ctx.write("\n\n");
/

KLRICE@orcl>


Gory Details of the release

Someone asked about change logs for these SQLCL updates.  Here's a filtered version of the gory details.



Thursday, October 15, 2015

SQLCL - Blob loading ALL the files


The github repo for all the things the DB Tools team does will be posted here: https://github.com/oracle/Oracle_DB_Tools  This will include sqlcl, sqldev, modeler, ords, and who knows what other things we have up our sleeves to post.  This repo is going to be made up of examples and getting-started things like a new extension for sqldev, a custom transformation for modeler, ords integration with handsontable, or sqlcl scripting.


Now onto more BLOB loading.

I'm slowly building out a helper.js which is now posted and I'll continue to update it on github here: https://github.com/oracle/Oracle_DB_Tools/tree/master/sqlcl/lib

There's a new function in there to exec host commands and capture the stdout, stderr, and return code of the command.  Coupling that with the BLOB loading, you can load an entire directory of files into a table with a small amount of javascript.




New helper function

/*
*  Runs the passed in command and returns an Object with
*   .rc      - the return code
*   .stdout  - STDOUT
*   .stderr  - STDERR
*
*/
helpers.exec = function (cmd) {
    var RunTime = Java.type("java.lang.Runtime");
    var Scanner = Java.type("java.util.Scanner");
    var p = RunTime.getRuntime().exec(cmd);

    var ret = {};
    /* the \A delimiter slurps the whole stream in one token */
    var s = new Scanner(p.getInputStream()).useDelimiter("\\A");
    ret.stdout = s.hasNext() ? s.next().trim() : "";
    s = new Scanner(p.getErrorStream()).useDelimiter("\\A");
    ret.stderr = s.hasNext() ? s.next().trim() : "";

    p.waitFor();
    ret.rc = p.exitValue();
    p.destroy();
    return ret;
};


Putting it together with the BLOB loading example

load('https://raw.githubusercontent.com/oracle/Oracle_DB_Tools/master/sqlcl/lib/helpers.js');

/* Clean out the table just to prove it's not a baked demo */
sqlcl.setStmt("truncate table k");
sqlcl.run();

/* File name */
var files= helpers.exec('find . -maxdepth 1 -type f ').stdout.split('\n');

/* bind map for reuse */
var binds = helpers.getBindMap();

for(f in files ) {
   //  ctx.write("Loading : " + files[f] + "\n");
  /* load the blob */
  blob = helpers.getBlobFromFile(files[f]);

  /* assign the binds */
  binds.put("path",files[f]);
  binds.put("b",blob);

 /* Just do it */
  var ret = util.execute("insert into k(path,blob_content,when) values(:path , :b, sysdate)",binds);
}

/* print the results to see if it worked */
sqlcl.setStmt("select path,dbms_lob.getlength(blob_content) from k order by when desc;");
sqlcl.run();



Removing the truncate and select at the end, loading the entire directory of client side files is 7 lines of javascript.



Wednesday, October 14, 2015

AskTom's print_table - SQLCL version

There was an idea logged over on community.oracle.com to add AskTom style print_table to sqlcl.

Here's how easy it is now that there's scripting in sqlcl.  This is a simple js file that loops over the rows and columns and prints them.  When using the executeReturnListofList the first row is the name of the columns for reference.  With that data there it's quite simple to customize any output.



/* rebuild the sql passed in since args is split on spaces */

var sql="";
for(var i=1;i<args.length;i++){
  sql = sql + " " + args[i];
}
ctx.write(sql + "\n\n");


var ret = util.executeReturnListofList(sql,null);

/* loop the rows */
for (var i = 0; i < ret.size(); i++) {
    ctx.write('>ROW \n'); 
    /*loop the cols */
    for( var ii=0;ii<ret[i].size();ii++) {
        ctx.write("\t" + ret[0][ii] + " : " + ret[i][ii] + "\n");
    } 
}

ctx.write('\n\n');




Now running it is very simple:  script printtab select ...
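Since executeReturnListofList hands back plain rows, the formatting part is easy to play with in isolation. Here's a plain-javascript stand-in with no sqlcl objects; it skips row 0 when printing since that row holds the column names:

```javascript
// Stand-in for util.executeReturnListofList output: row 0 holds
// the column names, later rows hold the data.
function printTable(listOfList) {
  var header = listOfList[0], out = '';
  for (var i = 1; i < listOfList.length; i++) {
    out += '>ROW\n';
    for (var ii = 0; ii < listOfList[i].length; ii++) {
      out += '\t' + header[ii] + ' : ' + listOfList[i][ii] + '\n';
    }
  }
  return out;
}

var txt = printTable([['ENAME', 'JOB'], ['KING', 'PRESIDENT']]);
// txt contains ">ROW" followed by "ENAME : KING" and "JOB : PRESIDENT"
```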



SQLCL - Blob loading from a file

There are a ton of ways the new scripting in sqlcl can help.  One example is a simpler way to load a local file into a BLOB column.

Prior to this, the only options I know of would be to write custom code or move the files to the server and use utl_file or xdb or some combination of things like that.

Enter SQLCL and javascript method #1.


Here's a simple script which loads a local file into a temp BLOB and uses that as a bind variable in the insert statement

/*
*  Function to take in a filename and add it to a map
*  of bind variables, creating the map if needed
*/
function addBindToMap(map,bindName,fileName){
     /*  conn is the actual JDBC connection */
     var b = conn.createBlob();

     var out = b.setBinaryStream(1);

     var path = java.nio.file.FileSystems.getDefault().getPath(fileName);

     /* slurp the file over to the blob */
     java.nio.file.Files.copy(path, out);
     out.flush();

     if ( map == null ) {
         /* java objects as binds needs a hashmap */
         var HashMap = Java.type("java.util.HashMap");
         map = new HashMap();
     }
     /* put the bind into the map under the passed-in name */
     map.put(bindName,b);
 return map;
}


/* File name */
var file = "/Users/klrice/workspace/raptor_common/10_5.log";

/* load binds */
binds = addBindToMap(null,"b",file);

/* add more binds */
binds.put("path",file);

/* exec the insert and pass binds */
var ret = util.execute("insert into k(path,blob_content,when) values(:path , :b, sysdate)",binds);

/* print the results */
sqlcl.setStmt("select path,dbms_lob.getlength(blob_content) from k order by when desc;");

sqlcl.run();



Running it is as expected: one new row, showing the blob size.  The only difference here is that you can avoid typing s-c-r-i-p-t and save 2 letters with just scri.




Method #2 shared library

I'll be making a library intended to make some things easier, such as file loading and creating the hashmap.  Taking this blob loading as an example, here are the first 2 things to put into such a library in helpers.js


/*
    This is a javascript library to make some of the common things
    in SQLCL simpler.
*/

var helpers = {} ;

/* for complex type a java hashmap is needed for binds */
helpers.getBindMap = function(){
   var HashMap = Java.type("java.util.HashMap");
   var map = new HashMap();
   return map;
};

/* create a temp blob and load it from a local to sqlcl file */
helpers.getBlobFromFile=function (fileName){
     var b = conn.createBlob();
     var out = b.setBinaryStream(1);
     var path = java.nio.file.FileSystems.getDefault().getPath(fileName);
     java.nio.file.Files.copy(path, out);
     out.flush();
 return b;
};


Now the blob loading script can be reduced to something like this

load('helpers.js');

/* File name */
var file = "/Users/klrice/workspace/raptor_common/10_5.log";

/* load binds */
binds = helpers.getBindMap();

/* add more binds */
binds.put("path",file);

blob = helpers.getBlobFromFile(file);

binds.put("b",blob);


/* exec the insert and pass binds */
var ret = util.execute("insert into k(path,blob_content,when) values(:path , :b, sysdate)",binds);




I'll be cleaning up this helpers.js and hosting it soon with frequent additions to new functions.

Tuesday, October 13, 2015

SQLcl - Oct 13th Edition

Another build of SQLCL is out.  This one has lots of great things in it.  There's a fix in there to work with Cygwin for the unlucky people that are still on windows.

Now the new stuff:  SCRIPT

This new command can radically change the things you can do in the tool.  The script command runs a script in any language that is JSR-223 enabled.  The best list I can find is here: https://en.wikipedia.org/wiki/List_of_JVM_languages  So if you want to program sql scripting in groovy, perl, javascript, pascal, ... all doable now.

Javascript is the only one built into Java by default so most examples will be focused on that, although I do have some Jython ones to post also.

To run a script simply type: script <script name>  If there is no file extension passed along it assumes .js, and the language is loaded based on this extension.  If the language can't be loaded you get a message saying as much: "Could not load Scripting Engine for :py"  More on how to add the other languages later.

Here's a simple script:

ctx.write("Hold my beer, I'm going to try something\n\n");

ctx.write('hi from javascript\n');

ctx.write('This is a count of my objects');
sqlcl.setStmt('select count(1) from user_objects; ');
sqlcl.run();

var ps1= 'set sqlprompt "@|blue  _USER|@@@|green _CONNECT_IDENTIFIER|@@|blue 🍼 >|@"'

ctx.write('Now javascript will change the sqlprompt\n');
sqlcl.setStmt(ps1);
sqlcl.run();

sqlcl.setStmt('@numbers.sql');

sqlcl.run();


This example, while simple, shows you can run commands from inside the script that affect sqlcl itself, like changing the prompt.  The joke here is that I connected to production, so a baby bottle is swapped in for the beer.




There are a few globals available for use.  There'll be more formal doc to follow.

    sqlcl
        Methods:  
             setStmt(<String of stuff to run>)
                 This can be a single statement, an entire script of stuff, or any sqlcl command such as "@numbers.sql"
             run()
                 Runs whatever is in the stmts

   ctx ( this has tons of methods but this is the single most important )
       Methods:
             write(<String>)

   util ( again tons of methods ) 
       Methods:
              execute(<string>,binds)
                   executes whatever is passed in with a boolean return for success/failure
              executeReturnOneCol(<string>,binds)
                   executes and returns the first row , first column
              executeReturnListofList(<string>,binds)
                   executes and returns an array(rows) of arrays(row).  
              executeReturnList(<string>,binds)
                   executes and returns an array ( rows ) of objects ( row )
              

Here is an example using some of these methods with the expected output shown below.


/* Look up a single value to use in a bind later */
var user = util.executeReturnOneCol('select user from dual');

var binds = {};
binds.name = 'EMP';
binds.who = user;

ctx.write('Using Binds:'+ binds + '\n');

ctx.write('***************************************************************\n');
ctx.write('*****    SIMPLE LOOP OF LIST OF LIST                ***********\n');
ctx.write('***************************************************************\n');


var ret = util.executeReturnListofList('select object_name,object_type from  all_objects where object_name = :name and owner = :who ',binds);

for (var i = 0; i < ret.length; i++) {
    ctx.write( ret[i][1]  + "\t" + ret[i][0] + "\n");
}

ctx.write('\n\n');

ctx.write('***************************************************************\n');
ctx.write('*****    SIMPLE LOOP OF LIST OF NAMES WITH BINDS     **********\n');
ctx.write('***************************************************************\n');


ret = util.executeReturnList('select object_name,object_type from  all_objects where object_name = :name and owner = :who ',binds);

for (i = 0; i < ret.length; i++) {
    ctx.write( ret[i].OBJECT_TYPE  + "\t" + ret[i].OBJECT_NAME+ "\n");
}





Tuesday, October 06, 2015

SQLcl - Oct 5th Edition

It was time for another push of the new sql command line, sqlcl.  Here's the changes in the latest build.  There are a number of bugs fixed relating to connecting to idle databases, alias feedback, and picking up the $ORACLE_HOME's jdbc driver when available.  There are only a couple of functional changes.


First is that now the jdbc connect string itself can be specified.  This opens up a number of options on how to connect.  This is the list of valid jdbc prefixes.  While I've not yet tested it, as noted in this list we will be testing loading sqlcl into the DB and trying it out in the JVM there.
  • jdbc:oracle:thin
  • jdbc:oracle:oci8
  • jdbc:oracle:kprb
  • jdbc:default:connection
  • jdbc:oracle:kprb:
  • jdbc:default:connection:





Next, the ansiconsole formatting of numbers was changed up a little.  The new syntax has 3 options.  One piece of feedback was that the numbers auto-formatting with separators made some things difficult.  Now just setting the console formatter will NOT format numbers.  Instead the default option needs to be specified.  There is also the option to specify any number format, which has to be in the form of a java format.  The syntax can be read here.


  • set sqlformat ansiconsole
  • set sqlformat ansiconsole default
  • set sqlformat ansiconsole <number format>


This script shows an example of all the new options.
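For instance (the #,##0.00 pattern is just a standard java DecimalFormat pattern I picked; the behaviors in the comments are the ones described above, not verified output):

```sql
set sqlformat ansiconsole          -- numbers are left unformatted
set sqlformat ansiconsole default  -- the old auto separator formatting
set sqlformat ansiconsole #,##0.00 -- any java DecimalFormat pattern
```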


The only caution when using the @|...|@ syntax is that only one option can be used at a time, as the ',' used to separate the formatting options gets read by the java formatter as a number format.  If this is something that folks end up using a lot, I'll come up with something to let it blink and be red at the same time!



Friday, September 11, 2015

SQLCL - More secure, now with REST !

A new SQLCL build was just posted, go grab it and kick the tires.  There are well over 100 bug fixes in there so it's better than ever.  Also there are some new things.


More Secure 


  Imagine you have an api that is called with a password or something in the parameters.  We use exec MY_API(...) and it works just fine.  However, anyone with access to v$sql just got everything you passed into the api, quite easily.  Enter SQLCL new thing #1: starting with this build, every "exec my_api(...)" is changed into an anonymous block with binds so those literals do not show up.



Procedure SECURE_API compiled

KLRICE@orcl >exec secure_api('my password');


PL/SQL procedure successfully completed.

KLRICE@orcl >select sql_text from v$sql where sql_text like '%secure_api%';


SQL_TEXT                                                       
BEGIN secure_api(:ZSqlDevUnIq1); END;                          
select sql_text from v$sql where sql_text like '%secure_api%'  


KLRICE@orcl >
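To picture what the tool is doing, here's a rough javascript sketch of the literal-to-bind rewrite. This is illustrative only; the :ZSqlDevUnIq bind prefix shows up in the v$sql output above, but the regex and everything else here is made up, not SQLCL's actual parser:

```javascript
// Illustrative sketch: turn  exec my_api('literal', 42)  into an
// anonymous block with generated binds so literals stay out of v$sql.
function toAnonBlock(execStmt) {
  var binds = {};
  var n = 0;
  var call = execStmt.replace(/^exec\s+/i, '').replace(/;\s*$/, '');
  // swap each quoted string or bare number for a generated bind name
  call = call.replace(/'(?:[^']|'')*'|\b\d+(?:\.\d+)?\b/g, function (lit) {
    var name = ':ZSqlDevUnIq' + (++n);
    binds[name] = lit.charAt(0) === "'"
      ? lit.slice(1, -1).replace(/''/g, "'")  // unquote, unescape ''
      : Number(lit);
    return name;
  });
  return { sql: 'BEGIN ' + call + '; END;', binds: binds };
}

var r = toAnonBlock("exec secure_api('my password')");
// r.sql is "BEGIN secure_api(:ZSqlDevUnIq1); END;"
```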




REST Support

Always check the help, the tool highlights what is new to make it easy.  It only took Alex 8 minutes from the download being available to seeing the new features !



 This build adds the ability to extract ORDS services.  Once a rest service is defined and it's time to move it from one system to another, this will make it easier than ever.  Simply spool to a file, export the definition, and you have an instant install script.

KLRICE@orcl >help
    ....
    REST
    ...
KLRICE@orcl >rest

REST
------

REST allows exporting ORDS 3.X services.

      REST export                     -  All modules
      REST export <module_name>    -  Export a specific module
      REST export <module_prefix>  -  Export a specific module related to the given prefix
      REST modules                    -  List the available modules
      REST privileges                 -  List the existing privileges
      REST schemas                    -  List the available schemas
KLRICE@orcl >rest modules

NAME   PREFIX  STATUS     ITEMS_PER_PAGE  
demos  demos/  PUBLISHED  25              
hr     hr      PUBLISHED  25              


KLRICE@orcl >rest export hr

declare
  l_module_id number;
  l_template_id number;
  l_handler_id number;
begin

  l_module_id := ORDS_METADATA.ORDS_SERVICES.create_module( p_name => 'hr' , p_uri_prefix => 'hr' , p_items_per_page => 25 , p_status => 'PUBLISHED' );
  l_template_id := ORDS_METADATA.ORDS_SERVICES.add_template( p_module_id => l_module_id, p_uri_template => 'everything' , p_priority => 0 , p_etag_type => 'HASH' );
  l_handler_id := ORDS_METADATA.ORDS_SERVICES.add_handler( p_template_id => l_template_id, p_source_type => 'json/query' , p_method => 'GET' , p_items_per_page => 0 , p_source => 'select d.*,' || unistr('\000a')
   || '' || unistr('\000a')
   || '   cursor( select e.*' || unistr('\000a')
   || '           from hr.employees e where e.department_id = d.department_id) emps           ' || unistr('\000a')
   || '   from hr.departments d');

commit;
end;
KLRICE@orcl >

Tuesday, August 25, 2015

Yet another CSV -> Table but with pipeline function

Here's just one more variation on how to get a CSV into table format.  It could have been done before but my google-fu couldn't find it anywhere.

First to get some sample data using the /*csv*/ hint in sqldev.




Then the results of putting it back to a table. The inline plsql is just to convert the text into a CLOB.



Now the details.

The csv parsing is completely borrowed (stolen) from another Chris, where he created a csv parser in plsql here.

The change I made, which is probably obvious, is to use a plsql object and a table of said object to put it into a form that can be used in sql.


I'm sure there are optimizations that could be added in here since I almost never use plsql objects.  But here's the code that renders the above results.



drop function parse_csv;
drop type t_csv_col;
drop type t_csv_table;

CREATE OR REPLACE TYPE t_csv_table IS OBJECT
(
    col1   varchar2(4000),
    col2   varchar2(4000),
    col3   varchar2(4000),
    col4   varchar2(4000),
    col5   varchar2(4000),
    col6   varchar2(4000),
    col7   varchar2(4000),
    col8   varchar2(4000),
    col9   varchar2(4000),
    col10   varchar2(4000)
)
/

show errors

CREATE or replace TYPE t_csv_COL IS TABLE OF t_csv_table
/

show errors



create or replace function parse_csv(
  p_clob clob,
  p_delim varchar2 default ',',
  p_optionally_enclosed varchar2 default '"' ) 
  return t_csv_COL
  
  is
  --
  CARRIAGE_RETURN constant char(1) := chr(13);
  LINE_FEED constant char(1) := chr(10);
  --
  l_char char(1);
  l_lookahead char(1);
  l_pos number := 0;
  l_token varchar2(32767) := null;
  l_token_complete boolean := false;
  l_line_complete boolean := false;
  l_new_token boolean := true;
  l_enclosed boolean := false;
  --
  l_lineno number := 1;
  l_columnno number := 1;
 
  -- additions from Kris
  l_ret t_csv_COL;
  l_ret_row t_csv_table;
  l_ret_index number;
begin
  -- initialize things
  l_ret := t_csv_COL();
  l_ret.extend;
  l_ret_row := t_csv_table(null,null,null,null,null,null,null,null,null,null);
  loop
    -- increment position index
    l_pos := l_pos + 1;
 
    -- get next character from clob
    l_char := dbms_lob.substr( p_clob, 1, l_pos);
 
    -- exit when no more characters to process
    exit when l_char is null or l_pos > dbms_lob.getLength( p_clob );
 
    -- if first character of new token is optionally enclosed character
    -- note that and skip it and get next character
    if l_new_token and l_char = p_optionally_enclosed then
      l_enclosed := true;
      l_pos := l_pos + 1;
      l_char := dbms_lob.substr( p_clob, 1, l_pos);
    end if;
    l_new_token := false;
 
    -- get look ahead character
    l_lookahead := dbms_lob.substr( p_clob, 1, l_pos+1 );
 
    -- inspect character (and lookahead) to determine what to do
    if l_char = p_optionally_enclosed and l_enclosed then
 
      if l_lookahead = p_optionally_enclosed then
        l_pos := l_pos + 1;
        l_token := l_token || l_lookahead;
      elsif l_lookahead = p_delim then
        l_pos := l_pos + 1;
        l_token_complete := true;
      else
        l_enclosed := false;
      end if;
 
    elsif l_char in ( CARRIAGE_RETURN, LINE_FEED ) and NOT l_enclosed then
      l_token_complete := true;
      l_line_complete := true;
 
      if l_lookahead in ( CARRIAGE_RETURN, LINE_FEED ) then
        l_pos := l_pos + 1;
      end if;
 
    elsif l_char = p_delim and not l_enclosed then
      l_token_complete := true;
 
    elsif l_pos = dbms_lob.getLength( p_clob ) then
      l_token := l_token || l_char;
      l_token_complete := true;
      l_line_complete := true;
 
    else
      l_token := l_token || l_char;
    end if;
           
    -- process a new token
    if l_token_complete then
      -- dbms_output.put_line( 'R' || l_lineno || 'C' || l_columnno || ': ' || nvl(l_token,'**null**') );
       -- assign the column value
      case  l_columnno 
          when 1 then  l_ret_row.col1 := l_token;
          when 2 then  l_ret_row.col2 := l_token;
          when 3 then  l_ret_row.col3 := l_token;
          when 4 then  l_ret_row.col4 := l_token;          
          when 5 then  l_ret_row.col5 := l_token;
          when 6 then  l_ret_row.col6 := l_token;
          when 7 then  l_ret_row.col7 := l_token;          
          when 8 then  l_ret_row.col8 := l_token;
          when 9 then  l_ret_row.col9 := l_token;
          when 10 then l_ret_row.col10 := l_token;
          else dbms_output.put_line('Only supports up to 10:'||l_columnno);
      end case;
                            
      l_columnno := l_columnno + 1;
      l_token := null;
      l_enclosed := false;
      l_new_token := true;
      l_token_complete := false;
    end if;
 
    -- process end-of-line here
    if l_line_complete then
      l_ret(l_lineno):= l_ret_row;
      l_lineno := l_lineno + 1;
      l_columnno := 1;
      l_line_complete := false;
      l_ret.extend;
      l_ret_row := t_csv_table(null,null,null,null,null,null,null,null,null,null);

    end if;
  end loop;
  return l_ret;
end parse_csv;
/
show errors
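For reference, the same tokenizer rules (optional '"' enclosure, doubled quotes inside an enclosed token, ',' delimiter) read naturally in javascript too. This is an illustration of the state machine for a single line, not a port of the plsql:

```javascript
// Minimal sketch of the same CSV rules: ',' delimiter, optional '"'
// enclosure, '""' as an escaped quote inside an enclosed token.
function parseCsvLine(line, delim, quote) {
  delim = delim || ',';
  quote = quote || '"';
  var cols = [], token = '', enclosed = false;
  for (var i = 0; i < line.length; i++) {
    var ch = line.charAt(i);
    if (enclosed && ch === quote) {
      if (line.charAt(i + 1) === quote) { token += quote; i++; } // "" -> "
      else { enclosed = false; }                 // closing enclosure
    } else if (!enclosed && ch === quote && token === '') {
      enclosed = true;                           // opening enclosure
    } else if (!enclosed && ch === delim) {
      cols.push(token); token = '';              // end of column
    } else {
      token += ch;
    }
  }
  cols.push(token);
  return cols;
}

var row = parseCsvLine('7369,"SMITH, J","says ""hi""",800');
// row is ['7369', 'SMITH, J', 'says "hi"', '800']
```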




Wednesday, July 15, 2015

Connecting to DBaaS, did you know this trick?

SSHTunneling Trick


The new command line is a must try, say 10 out of 10 people that built it.  The tool has ssh tunneling of ports built in, as described by Barry.

This means you can script opening your sshtunnel from the command line and run sql very quickly.  Here's the one I used recently at Kscope15.


Now the trick is that once this port is forwarded, any tool can use it.  In case you (like me) forgot which port is forwarded, since it's all scripted so as not to be thought about again, there's a "show sshtunnel" command built in also.



Now I can take this over to any tool, which for me is sqldev, and use that port to make a connection




Yes, a NEW demo on EMP 

If you thought you'd seen every demo possible off the data in emp, here's one more: salary represented as beer.


Tuesday, June 30, 2015

New Tools releases , now with Java

What's New

  For the 90%+ of people using sqldev/modeler on windows, the JDK bundled versions are back.  So no more debating what to install or worrying about conflicting Java versions.

  Lots of bug fixes.

   My favorite bug is now fixed so you can use emojis in your sql> prompt.


RESTful CSV Loading

  We wrapped the same CSV import code from SQL Developer into the REST Auto-Enablement feature.  Here's a very short example of javascript batch inserting 4 records into the EMP table.  I'm not sure it can get much easier.


// CSV (POST http://localhost:8888/ords/klrice/emp/batchload)

jQuery.ajax({
    url: "http://localhost:8888/ords/klrice/emp/batchload?" + jQuery.param({
        "responseFormat": "RAW",
    }),
    type: "POST",
    headers: {
        "Content-Type": "text/csv",
    },
    processData: false,
    data: "empno,ename,job,mgr,hiredate,sal,comm,deptno\n" +
          "5001,M,SPY MAST,,2005-05-01 11:00:01,4000,,11\n" +
          "5002,J.BOND,SPY,0,2005-05-01 11:00:01,2000,,11\n" +
          "5003,R.Cooper,SOFTWARE,0,2005-05-01 11:00:01,10000,,11\n" +
          "5004,Max,DENTIST,0,2005-05-01 11:00:01,5000,,11\n",
})
.done(function(data, textStatus, jqXHR) {
    console.log("HTTP Request Succeeded: " + jqXHR.status);
    console.log(data);
})
.fail(function(jqXHR, textStatus, errorThrown) {
    console.log("HTTP Request Failed");
})
.always(function() {
    /* ... */
});


Here's the same example in Paw, which is a great client if you are on OSX.
  


While this is a very simple case, there are a lot of switches to customize this importing.  Those options are all in the doc found here:  http://docs.oracle.com/cd/E56351_01/doc.30/e56293/develop.htm#BABCACEC

Go Forth and Download



SQL Developer 4.1.1

Data Modeler 4.1.1

REST Data Services 3.0.1

SQLCL - Main Line Build
http://www.oracle.com/technetwork/developer-tools/sql-developer/downloads/index.html
( Last Link on the bottom of the sqldev download page )

Tuesday, June 16, 2015

SQLcl , yet again

By the Numbers


There's a new SQLcl out for download.  In case there are too many to keep track of, the build numbers make it quite easy to tell if you have the latest.  The build posted today is sqlcl-4.2.0.15.167.0827-no-jre.zip

Here's what we are doing:
4.2.0  <- doesn't matter at all
15     <- year
167    <- day in julian
0827   <- time the build was done

So yes, this build was done today at 8am !
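That decode is mechanical enough to script. A throwaway javascript sketch (the regex and function name are mine; only the field meanings come from the breakdown above):

```javascript
// Pull version / year / julian day / build time out of an sqlcl
// zip name, per the numbering scheme described above.
function decodeBuild(zipName) {
  var m = /sqlcl-(\d+\.\d+\.\d+)\.(\d{2})\.(\d{3})\.(\d{4})/.exec(zipName);
  if (!m) return null;
  return { version: m[1],
           year: '20' + m[2],
           julianDay: Number(m[3]),
           time: m[4].slice(0, 2) + ':' + m[4].slice(2) };
}

var b = decodeBuild('sqlcl-4.2.0.15.167.0827-no-jre.zip');
// b.year is "2015", b.julianDay is 167, b.time is "08:27"
```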


Info on PL/SQL

I don't think I covered this before.  The info command on any plsql gives you actually useful stuff.  It prints out a stub of pl/sql that can be copied and pasted to get going much quicker than looking at a tabular describe output of args.  I'll pick on dbms_metadata, which is a huge package.



This can still be way too much 'info' if the packages are very large.  The other part of info is that you can get a subset.  Here the ".a" was added to filter it down.  I'll blog more on why I'm showing metadata, as there are some pretty interesting usages that I just found.



Show All

Not much to see here.  Just looks like it should.

Show All+

Much like the info/info+ we added lots more.  Just scroll over this or try it out and see how much more is there.  It should give you just about anything you may be interested in.




KLRICE@orcl🍺 >show all+

Show All:
========================================
appinfo is OFF and set to "SQL Command Line"
arraysize 15
autocommit OFF
autoprint OFF
autotrace OFF
BTitle OFF
colsep " "
concat "." (hex 5c)
copycommit 0
define "&"
echo OFF
escape OFF
FEEDBACK ON for 6 or more rows
heading ON
headsep "|" (hex 7c)
linesize 136
long 80
longchunksize 80
null null
numformat ""
numwidth 10
pagesize 44
PAUSE is OFF
release 1201000200
scan ON
serveroutput OFF
space " "
spool OFF
sqlcode 0
sqlprompt "@|red KLRICE|@@@|green orcl|@@|blue 🍺 >|@ "
suffix "sql"
termout ON
time OFF
timing OFF
trimout OFF
trimspool OFF
TTitle OFF
USER is "KLRICE"
verify ON
wrap : lines will be wrapped

Show All+ (Single Line):
========================================
_restrict 0
buffer SQL
CD : Default
commandline true
con_id 3
con_name ORCL
encoding "UTF-8"
LDAPCON : Default
net: ON
noverwrite: WARN
null null
SQL Format : ansiconsole
SQLPATH : /Users/klrice/workspace/raptor_common/:.
systemout "true"
Toplevel TRUE 
url top_base_url ""    base_url ""    node_url (Parent) "file:/Users/klrice/workspace/raptor_common"
xquery BASEURI "" CONTEXT "" NODE DEFAULT ORDERING DEFAULT

Show All+ (Multi Line):
========================================

CONNECTION:
========================================
CONNECTION:
 KLRICE@jdbc:oracle:thin:@localhost:1521/orcl 
CONNECTION_IDENTIFIER:
 orcl
CONNECTION_DB_VERSION:
 Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
 With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
NOLOG:
 false
PRELIMAUTH:
 false

DDL:
========================================
STORAGE : ON
INHERIT : ON
SQLTERMINATOR : ON
OID : ON
SPECIFICATION : ON
TABLESPACE : ON
SIZE_BYTE_KEYWORD : ON
PRETTY : ON
REF_CONSTRAINTS : ON
FORCE : ON
PARTITIONING : ON
CONSTRAINTS : ON
INSERT : ON
BODY : ON
CONSTRAINTS_AS_ALTER : ON
SEGMENT_ATTRIBUTES : ON

DEFINES:
========================================
DEFINE _DATE =  "16-JUN-15" (CHAR)
DEFINE _CONNECT_IDENTIFIER =  "orcl" (CHAR)
DEFINE _USER =  "KLRICE" (CHAR)
DEFINE _PRIVILEGE =  "" (CHAR)
DEFINE _SQLPLUS_RELEASE =  0401000000 (NUMBER)
DEFINE _EDITOR =  "vim" (CHAR)
DEFINE _O_VERSION =  "Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options" (CHAR)
DEFINE _O_RELEASE =  1201000000 (NUMBER)
DEFINE _PWD =  "/Users/klrice/workspace/raptor_common" (CHAR)

INSTANCE:
========================================
USERNAME KLRICE
INSTANCE_NAME cdb1
HOST_NAME klrice-nsvpn-dhcp-141-144-19-110.vpn.oracle.com
SID 21
VERSION 12.1.0.2.0
STARTUP_DAY 20150616

INTERNALERRORS:
========================================
sqldev.error "false"
sqldev.error.any.in "false"
script.runner.autocommit.errorflag "" 
sqldev.last.err.message "" 
sqldev.last.err.message.forsqlcode "" 

JDBC:
========================================
-- Database Info --
Database Product Name: Oracle
Database Product Version: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Database Major Version: 12
Database Minor Version: 1
-- Driver Info --
Driver Name: Oracle JDBC driver
Driver Version: 12.1.0.2.0
Driver Major Version: 12
Driver Minor Version: 1
Driver URL: jdbc:oracle:thin:@localhost:1521/orcl

NLS:
========================================
DB_TIMEZONE +00:00
NLS_CALENDAR GREGORIAN
NLS_CHARACTERSET AL32UTF8
NLS_COMP BINARY
NLS_CURRENCY $
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_DUAL_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_LANGUAGE AMERICAN
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NUMERIC_CHARACTERS .,
NLS_SORT BINARY
NLS_TERRITORY AMERICA
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
SESSION_TIMEZONE America/New_York
SESSION_TIMEZONE_OFFSET -04:00

PDBS:
========================================
CON_ID  NAME  OPEN_MODE   RESTRICTED  
3       ORCL  READ WRITE  NO          


SGA:
========================================
Total System Global Area   838860800 bytes      
Fixed Size                   2929936 bytes      
Variable Size              545262320 bytes      
Database Buffers           285212672 bytes      
Redo Buffers                 5455872 bytes      

KLRICE@orcl🍺 >




Thursday, June 04, 2015

ORDS - Auto REST table feature


Got a question on how easy it is to use ORDS to perform insert | update | delete on a table.  Here are the steps.

1) Install ORDS ( cmd line or there's a new wizard in sqldev )

2) Enable the schema and table, in this case klrice.emp ( again there's a wizard in sqldev )


BEGIN

    ORDS.ENABLE_SCHEMA(p_enabled => TRUE,
                       p_schema => 'KLRICE',
                       p_url_mapping_type => 'BASE_PATH',
                       p_url_mapping_pattern => 'klrice',
                       p_auto_rest_auth => FALSE);
    

    ORDS.ENABLE_OBJECT(p_enabled => TRUE,
                       p_schema => 'KLRICE',
                       p_object => 'EMP',
                       p_object_type => 'TABLE',
                       p_object_alias => 'emp',
                       p_auto_rest_auth => FALSE);
    
    commit;

END;
/



3) Check the catalog.  Everything enabled will be in the catalog, whether they are auto-enabled tables or lovingly crafted modules.  For this case, we can see /emp/ is there
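If ORDS is running, the catalog can also be checked from the command line.  This is a sketch assuming a standalone ORDS listening on localhost:8080; adjust the host, port, and context path for your install:

```shell
# Hypothetical host/port for a standalone ORDS install
curl http://localhost:8080/ords/klrice/metadata-catalog/
```

The response is a JSON document listing every enabled object, and /emp/ should show up in it.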



4) Do work.  Here I'm inserting a new record.  The PK is referenced in the URI and in the JSON doc.  Notice the HTTP method here is a PUT and the content type is set to application/json.
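For anyone following along without the screenshot, the same insert can be sketched with curl.  The host, port, and EMP column values here are illustrative, not from the original post; note the PK value appearing in both the URI and the document:

```shell
# Hypothetical endpoint and data -- adjust for your install
curl -i -X PUT \
     -H "Content-Type: application/json" \
     -d '{"empno":7999,"ename":"RICE","job":"DEV","deptno":10}' \
     http://localhost:8080/ords/klrice/emp/7999
```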



5) The results.  It worked.



6) Now for doing an update.





7) The result: the response back from ORDS contains the correct values.




8) Now go show the people hand-coding REST what took one PL/SQL block to enable!

Wednesday, June 03, 2015

Javascript Inside the Database


Where there's a will, there's a way. Not sure of the best use of this offhand, but here it is.

Javascript inside an Oracle Database.

Since JavaScript is included in Java via the ScriptEngine framework, it's fairly easy to add a Java wrapper that calls the JavaScript engine.
create or replace and compile java source named "EvalScript" as
import javax.script.*;
public class EvalScript {
    public static String eval(String argument) throws Exception {
        // create a script engine manager
        ScriptEngineManager factory = new ScriptEngineManager();
        // create a JavaScript engine
        ScriptEngine engine = factory.getEngineByName("javascript");

        // set the param into the javascript context
        engine.put("s", argument);

        // evaluate JavaScript code from String
        return  engine.eval("var a='hi';  a + ', ' + s;").toString();
        //return "a";
    }
}
/

Then the standard plsql wrapper to call a java stored procedure.
create or replace function eval (argument in varchar2)
 return varchar2  as language java
name 'EvalScript.eval(java.lang.String) return java.lang.String';
/


Now you can simply call that function and get a return value from JavaScript, or make a REST call via ORDS.
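A quick sanity check of the function from SQL; given the hard-coded script above, passing 'world' should come back prefixed with 'hi':

```sql
select eval('world') as result from dual;

-- RESULT
-- ---------
-- hi, world
```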

This is a simple REST call that takes in {name} and passes it to the function just created.






Then pass in the value for the JavaScript engine and you now have ORDS + DB + server-side JavaScript.




Tuesday, May 05, 2015

REST Data Services and SQL Developer












The database tools team put out three new GA releases and an update to our SQLCL.


Official Releases are here:

   SQL Developer, Modeler, and Data Miner:
       https://blogs.oracle.com/otn/entry/news_oracle_updates_development_tools
       https://blogs.oracle.com/datamining/entry/oracle_data_miner_4_1

  REST Data Services now with SODA
       https://blogs.oracle.com/otn/entry/news_oracle_rest_enables_oracle
       https://blogs.oracle.com/databaseinsider/entry/oracle_database_12c_as_a

Some of the feedback we received was that ORDS was difficult to set up and get running.  This prompted an overhaul of the install process.  There are now a few ways to get going.  As with earlier releases, there is a command line.  There are now 'simple' and 'advanced' installs, plus an option for a parameterized ( silent ) install.  The simple is the default and will ask minimal information.  The catch is that it makes assumptions, and some of those assumptions, like a USERS tablespace, may not work for your database.  In that case try the advanced install; it will prompt for _everything_ and then some for a fully customized install.





Taking the simple install one step further, REST Data Services is now embedded into SQL Developer to get up and running faster than ever.  Under the Tools menu there are now options to Manage, Run, Install, and Uninstall ( although why would anyone? ).




The first page of the installation asks for two things.  First, whether to use a specific ORDS or the embedded one.  The version number is printed for whichever is chosen to ensure the proper decision.  The second is simply where to store the configuration.  This configuration is just the normal files and is portable to any other ORDS to set it up.

The next page asks for the database credentials and connect strings.  There is something new here in that the user is now ORDS_PUBLIC_USER.  This is a change that came with the decoupling from Application Express.  ORDS is now completely standalone and APEX is no longer required.
If connecting as ORDS_PUBLIC_USER fails ( as in the user doesn't exist ), the wizard will prompt for the credentials needed to install ORDS.
When using APEX or any OWA Toolkit-based applications, enter the database user those should execute as.

In ORDS 2.x, the repository where the REST definitions were stored was inside Application Express' repository.  ORDS 3.0 has its own dedicated ORDS_METADATA database user which holds this information.  This means there could be definitions in both places.  For APEX 5.0, the screens in SQL Workshop will continue to operate against the tables in the APEX repository.  To keep that working, this wizard page has to be filled out so the 2.0 definitions remain accessible.

When operating like this, ORDS will use the metadata from both the 3.0 repository and the APEX repository.  This gives developers the flexibility to continue developing wherever they prefer.  The new features such as auto enablement will only be in the new ORDS 3.0 repository.

What this means to you:
  - Continue to use the APEX screens; everything will work just fine
  - REST definitions created in SQL Developer will go into the new ORDS schema
  - If there is a conflict, the ORDS repository will be chosen over the APEX repository




This screen gives the option to start ORDS up as soon as the wizard finishes installing and setting up.  It is also where the static file location is set; think /i/ for APEX.


The ORDS Administrator and ORDS RESTful User are used to connect to the running ORDS from SQL Developer for remote administrations.

A quick final review of the options and it's up and running.




Friday, May 01, 2015

Logstash and Oracle Database

    For anyone that doesn't know what Logstash is head over to http://logstash.net/ and watch the video on the home page.  Robin Moffatt has done a great blog post on the full ELK stack with OBIEE.

    This blog post is a first cut at an input filter for Logstash to gather metrics, logs, or anything that can be expressed in SQL.  A huge caution that this is a 0.01 attempt and will get better as I need it to.  This is a basic config that selects the session count every 5 seconds and reports it, in this case, just to STDOUT.  There are many choices of where to send this information, like to Elasticsearch for visualization in Kibana as Mark showed in the OBIEE example.

    If anyone uses this, changes it, likes it, doesn't like it,  let me know.

input { 
  orasql { sql      => "SELECT count(1) sessions from v$session "
           dbuser   => "klrice"
           dbpasswd => "klrice"
           dburl    => "localhost/orcl"
           interval =>  5
      } 
}
output {
 stdout { codec => rubydebug }
}

Running this just needs the plugin path added so Logstash can find the orasql input filter.
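Something like the following, where the paths and config file name are my assumptions; the plugin file itself needs to sit under <pluginpath>/logstash/inputs/orasql.rb for Logstash to find it:

```shell
# --pluginpath points at the directory holding logstash/inputs/orasql.rb
bin/logstash agent -f orasql.conf --pluginpath ./plugins
```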

     The filter itself is where everything Oracle-wise happens.  Since Logstash is written in JRuby, I'm able to use the normal Oracle JDBC driver to access the database.  For this case I have a ./lib folder with a local copy.  This could just as easily be $ORACLE_HOME/jdbc/lib/ojdbc6.jar to ensure using the same version as the database when on the same server.

     Since connecting and establishing a session is expensive when repeatedly issuing SQL, I keep a connection open.  The connection could be closed for any number of reasons, so there's an auto-reconnect in here as well, done by calling getConnection before any SQL exec.   Before someone asks: no, sqldev will not get an auto-reconnect like this.




# encoding: utf-8
require "logstash/inputs/base"
require "logstash/namespace"
require "socket" # for Socket.gethostname
require 'java'
# SETUP THE JDBC DRIVER TO YOUR LOCATION
$CLASSPATH << "lib/ojdbc6.jar"

java_import 'oracle.jdbc.OracleDriver'
java_import 'java.sql.DriverManager'


# Run sql line tools and capture the whole output as an event.
#
# Notes:
#
# * The '@source' of this event will be the sql run.
# * The '@message' of this event will be the entire stdout of the sql
#   as one event.
#
class LogStash::Inputs::OraSQL < LogStash::Inputs::Base

  config_name "orasql"
  milestone 2
   
  $currConnection = nil

  default :codec, "plain"

  # Set this to true to enable debugging on an input.
  config :debug, :validate => :boolean, :default => false, :deprecated => "This setting was never used by this plugin. It will be removed soon."

  # SQL to run. For example, "select * from emp"
  config :sql, :validate => :string, :required => true

  # Database user to connect as. For example, "klrice"
  config :dbuser, :validate => :string, :required => true ,  :default => "/"

  # Password for the database user
  config :dbpasswd, :validate => :string, :required => false , :default => ""

  # JDBC connect string (host/service). For example, "//localhost/orcl"
  config :dburl, :validate => :string, :required => true , :default => "//localhost/orcl"

  # Interval to run the sql. Value is in seconds.
  config :interval, :validate => :number, :required => true , :default => 120

  public
  def register
    @logger.info("Registering SQL Input", :type => @type,
                 :sql => @sql, :interval => @interval)
  end # def register
  
  public 
  def getConnection
      if $currConnection == nil  or  ! $currConnection.isValid(100)  
           oradriver = OracleDriver.new
           DriverManager.registerDriver oradriver

           con_props = java.util.Properties.new
           con_props.setProperty("user", @dbuser)
           con_props.setProperty("password", @dbpasswd )
 
           conn =  Java::oracle.jdbc.OracleDriver.new.connect('jdbc:oracle:thin:@' + @dburl, con_props)

           conn.auto_commit = false

           $currConnection = conn
      end 
      
      return $currConnection  
       
  end # end getConnection

  public
  def run(queue)
    hostname = Socket.gethostname
   
    loop do
      start = Time.now
      @logger.info? && @logger.info("Running SQL", :sql => @sql)

      conn = getConnection

      stmt = conn.prepare_statement @sql
      rset = stmt.execute_query
      while ( rset.next )
         i=1
         event = LogStash::Event.new
         decorate(event)
         cols = rset.getMetaData.getColumnCount
         msg = ""
         r=0
         while ( i <= cols ) 
             val = rset.getString(i)
             if ( val != nil ) 
                if ( r > 0 )
                   msg = msg + ","
                end
                event[ rset.getMetaData.getColumnName(i).downcase ] =  val
                msg = msg +  "\"" +rset.getMetaData.getColumnName(i).downcase +  "\" : \"" + val + "\"" 
                r=r+1
             end
             i = i + 1
         end
         event['message'] = "{" + msg + "}"
        queue << event
      end
      rset.close
      stmt.close

      duration = Time.now - start
      @logger.info? && @logger.info("Command completed", :sql => @sql,
                                    :duration => duration)

      # Sleep for the remainder of the interval, or 0 if the duration ran
      # longer than the interval.
      sleeptime = [0, @interval - duration].max
      if sleeptime == 0
        @logger.warn("Execution ran longer than the interval. Skipping sleep.",
                     :sql => @sql, :duration => duration,
                     :interval => @interval)
      else
        sleep(sleeptime)
      end
    end # loop
  end # def run
end # class LogStash::Inputs::OraSQL





Thursday, April 16, 2015

Repeating another SQLCL EA release

    It's been a month, and in that time a lot of bugs have been fixed and new features added. That adds up to time for another EA of our new sqlcl.  The third sqlcl EA is out now and can be found here on the SQL Developer 4.1 download page.  

  Not sure what this sqlcl thing is? Check here.


LOAD:  Making CSV loading to the db very easy

Here's a very quick example of the possibilities.

-- drop it to be sure
drop table emp2;

-- create an empty table
create table emp2 as select * from emp where rownum=0;

-- don't need to see it
set term off
-- set the format to CSV
set sqlformat csv
-- spool
spool emp.csv
-- spooling....
select * from emp;
-- spooled
spool off

-- need to see again
set term on
-- load into the new table
load EMP2 emp.csv



The output from running the script makes it very easy to see what happened.



REPEAT : Build your own monitor

This is another very simple yet useful command.  The idea is to build your own SQL that you may want to run to see what's going on.  This example is a simple select against v$session with a decode on the command column to see what the sessions are doing.  Running it once is easy; running it again and again means hitting / and return a lot.

Enter the repeat command.  Just specify how many times to run and what the sleep interval is, and sqlcl takes care of running it.
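A minimal sketch of a session, with an arbitrary iteration count and interval: run the query once, then ask sqlcl to repeat the buffered statement.

```sql
select count(1) sessions from v$session;

-- rerun the last statement 10 times, sleeping 5 seconds between runs
repeat 10 5
```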



In the header, there's a status showing which iteration it is, what the time is ( and yes, I know there's a bug in the number format ), and what the delay is before running it again.




The Report being used

The SQL I'm using here is the same SQL as in the SQL Developer Sessions report.  It can be found off the Tools->Sessions menu, and additionally in the Reports section under DBA Reports.



select vs.sid ,serial# serial, vs.sql_id,
       vs.username "Username",
       case when vs.status = 'ACTIVE' then last_call_et else null end "Seconds in Wait",
       decode(vs.command,  
                         0,null, 
                         1,'CRE TAB', 
                         2,'INSERT', 
                         3,'SELECT', 
                         4,'CRE CLUSTER', 
                         5,'ALT CLUSTER', 
                         6,'UPDATE', 
                         7,'DELETE', 
                         8,'DRP CLUSTER', 
                         9,'CRE INDEX', 
                         10,'DROP INDEX', 
                         11,'ALT INDEX', 
                         12,'DROP TABLE', 
                         13,'CRE SEQ', 
                         14,'ALT SEQ', 
                         15,'ALT TABLE', 
                         16,'DROP SEQ', 
                         17,'GRANT', 
                         18,'REVOKE', 
                         19,'CRE SYN', 
                         20,'DROP SYN', 
                         21,'CRE VIEW', 
                         22,'DROP VIEW', 
                         23,'VAL INDEX', 
                         24,'CRE PROC', 
                         25,'ALT PROC', 
                         26,'LOCK TABLE', 
                         28,'RENAME', 
                         29,'COMMENT', 
                         30,'AUDIT', 
                         31,'NOAUDIT', 
                         32,'CRE DBLINK', 
                         33,'DROP DBLINK', 
                         34,'CRE DB', 
                         35,'ALTER DB', 
                         36,'CRE RBS', 
                         37,'ALT RBS', 
                         38,'DROP RBS', 
                         39,'CRE TBLSPC', 
                         40,'ALT TBLSPC', 
                         41,'DROP TBLSPC', 
                         42,'ALT SESSION', 
                         43,'ALT USER', 
                         44,'COMMIT', 
                         45,'ROLLBACK', 
                         46,'SAVEPOINT', 
                         47,'PL/SQL EXEC', 
                         48,'SET XACTN', 
                         49,'SWITCH LOG', 
                         50,'EXPLAIN', 
                         51,'CRE USER', 
                         52,'CRE ROLE', 
                         53,'DROP USER', 
                         54,'DROP ROLE', 
                         55,'SET ROLE', 
                         56,'CRE SCHEMA', 
                         57,'CRE CTLFILE', 
                         58,'ALTER TRACING', 
                         59,'CRE TRIGGER', 
                         60,'ALT TRIGGER', 
                         61,'DRP TRIGGER', 
                         62,'ANALYZE TAB', 
                         63,'ANALYZE IX', 
                         64,'ANALYZE CLUS', 
                         65,'CRE PROFILE', 
                         66,'DRP PROFILE', 
                         67,'ALT PROFILE', 
                         68,'DRP PROC', 
                         69,'DRP PROC', 
                         70,'ALT RESOURCE', 
                         71,'CRE SNPLOG', 
                         72,'ALT SNPLOG', 
                         73,'DROP SNPLOG', 
                         74,'CREATE SNAP', 
                         75,'ALT SNAP', 
                         76,'DROP SNAP', 
                         79,'ALTER ROLE', 
                         85,'TRUNC TAB', 
                         86,'TRUNC CLUST', 
                         88,'ALT VIEW', 
                         91,'CRE FUNC', 
                         92,'ALT FUNC', 
                         93,'DROP FUNC', 
                         94,'CRE PKG', 
                         95,'ALT PKG', 
                         96,'DROP PKG', 
                         97,'CRE PKG BODY', 
                         98,'ALT PKG BODY', 
                         99,'DRP PKG BODY', 
                         to_char(vs.command)) "Command",
    vs.machine "Machine",
    vs.osuser "OS User", 
    lower(vs.status) "Status",
    vs.program "Program",
    vs.module "Module",
    vs.action "Action",
    vs.resource_consumer_group,
    vs.client_info,
    vs.client_identifier
from gv$session vs 
where vs.USERNAME is not null
and nvl(vs.osuser,'x') <> 'SYSTEM'
and vs.type <> 'BACKGROUND'
order by 1;

Friday, January 30, 2015

3 new things about sdsql


New Name !


The first is a new name: with this EA it's named sqlcl, for SQL command line.  However, the binary to start it up is simply sql.  Nothing is easier when you need to run some SQL than typing 'sql' and hitting enter.


#./sql klrice/klrice@//localhost/orcl

SQLcl: Release 4.1.0 Beta on Fri Jan 30 12:53:05 2015

Copyright (c) 1982, 2015, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production 


There's never too much Information


The next is the info command, which has a lot more in it now.  Hit me up on twitter if you can think of more things it should show; some of the things here came directly from folks doing exactly that. Sometimes functional specs can be done in 140 chars.



KLRICE@ORCL>
KLRICE@ORCL>info hr.employees
TABLE: EMPLOYEES 
  LAST ANALYZED:2015-01-17 12:06:26.0 
  ROWS         :107 
  SAMPLE SIZE  :107 
  INMEMORY     :DISABLED 
  COMMENTS     :employees table. Contains 107 rows. References with departments,
                       jobs, job_history tables. Contains a self reference. 

Columns 
NAME             DATA TYPE           NULL  DEFAULT    COMMENTS
*EMPLOYEE_ID     NUMBER(6,0)         No               Primary key of employees table.
 FIRST_NAME      VARCHAR2(20 BYTE)   Yes              First name of the employee. A not null column.
 LAST_NAME       VARCHAR2(25 BYTE)   No               Last name of the employee. A not null column.
 EMAIL           VARCHAR2(25 BYTE)   No               Email id of the employee
 PHONE_NUMBER    VARCHAR2(20 BYTE)   Yes              Phone number of the employee; includes country code and area code
 HIRE_DATE       DATE                No               Date when the employee started on this job. A not null column.
 JOB_ID          VARCHAR2(10 BYTE)   No               Current job of the employee; foreign key to job_id column of the
                                                      jobs table. A not null column.
 SALARY          NUMBER(8,2)         Yes              Monthly salary of the employee. Must be greater
                                                      than zero (enforced by constraint emp_salary_min)
 COMMISSION_PCT  NUMBER(2,2)         Yes              Commission percentage of the employee; Only employees in sales
                                                      department elgible for commission percentage
 MANAGER_ID      NUMBER(6,0)         Yes              Manager id of the employee; has same domain as manager_id in
                                                      departments table. Foreign key to employee_id column of employees table.
                                                      (useful for reflexive joins and CONNECT BY query)
 DEPARTMENT_ID   NUMBER(4,0)         Yes              Department id where employee works; foreign key to department_id
                                                      column of the departments table

Indexes
INDEX_NAME            UNIQUENESS  STATUS  FUNCIDX_STATUS  COLUMNS                COLUMN_EXPRESSION  
HR.EMP_JOB_IX         NONUNIQUE   VALID                   JOB_ID                                    
HR.EMP_NAME_IX        NONUNIQUE   VALID                   LAST_NAME, FIRST_NAME                     
HR.EMP_EMAIL_UK       UNIQUE      VALID                   EMAIL                                     
HR.EMP_EMP_ID_PK      UNIQUE      VALID                   EMPLOYEE_ID                               
HR.EMP_MANAGER_IX     NONUNIQUE   VALID                   MANAGER_ID                                
HR.EMP_DEPARTMENT_IX  NONUNIQUE   VALID                   DEPARTMENT_ID                             


References
TABLE_NAME   CONSTRAINT_NAME  DELETE_RULE  STATUS   DEFERRABLE      VALIDATED  GENERATED  
DEPARTMENTS  DEPT_MGR_FK      NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
EMPLOYEES    EMP_MANAGER_FK   NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
JOB_HISTORY  JHIST_EMP_FK     NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  

KLRICE@ORCL>




Info doesn't end there.  You can do info+ and get different data, and I'm in the process of changing the indexes section to show some stats also, like the main table.

KLRICE@ORCL>info+ hr.employees
TABLE: EMPLOYEES 
  LAST ANALYZED:2015-01-17 12:06:26.0 
  ROWS         :107 
  SAMPLE SIZE  :107 
  INMEMORY     :DISABLED 
  COMMENTS     :employees table. Contains 107 rows. References with departments,
                       jobs, job_history tables. Contains a self reference. 

Columns 
NAME             DATA TYPE           NULL  DEFAULT    LOW_VALUE             HIGH_VALUE            NUM_DISTINCT   HISTOGRAM  
*EMPLOYEE_ID     NUMBER(6,0)         No               100                   206                   107            NONE       
 FIRST_NAME      VARCHAR2(20 BYTE)   Yes              Adam                  Winston               91             NONE       
 LAST_NAME       VARCHAR2(25 BYTE)   No               Abel                  Zlotkey               102            NONE       
 EMAIL           VARCHAR2(25 BYTE)   No               ABANDA                WTAYLOR               107            NONE       
 PHONE_NUMBER    VARCHAR2(20 BYTE)   Yes              011.44.1343.329268    650.509.4876          107            NONE       
 HIRE_DATE       DATE                No               1987.06.17.00.00.00   2000.04.21.00.00.00   98             NONE       
 JOB_ID          VARCHAR2(10 BYTE)   No               AC_ACCOUNT            ST_MAN                19             NONE       
 SALARY          NUMBER(8,2)         Yes              2100                  24000                 57             NONE       
 COMMISSION_PCT  NUMBER(2,2)         Yes              .1                    .4                    7              NONE       
 MANAGER_ID      NUMBER(6,0)         Yes              100                   205                   18             NONE       
 DEPARTMENT_ID   NUMBER(4,0)         Yes              10                    110                   11             NONE       

Indexes
INDEX_NAME            UNIQUENESS  STATUS  FUNCIDX_STATUS  COLUMNS                COLUMN_EXPRESSION  
HR.EMP_JOB_IX         NONUNIQUE   VALID                   JOB_ID                                    
HR.EMP_NAME_IX        NONUNIQUE   VALID                   LAST_NAME, FIRST_NAME                     
HR.EMP_EMAIL_UK       UNIQUE      VALID                   EMAIL                                     
HR.EMP_EMP_ID_PK      UNIQUE      VALID                   EMPLOYEE_ID                               
HR.EMP_MANAGER_IX     NONUNIQUE   VALID                   MANAGER_ID                                
HR.EMP_DEPARTMENT_IX  NONUNIQUE   VALID                   DEPARTMENT_ID                             


References
TABLE_NAME   CONSTRAINT_NAME  DELETE_RULE  STATUS   DEFERRABLE      VALIDATED  GENERATED  
DEPARTMENTS  DEPT_MGR_FK      NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
EMPLOYEES    EMP_MANAGER_FK   NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
JOB_HISTORY  JHIST_EMP_FK     NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  

KLRICE@ORCL>


Those who don't know history are doomed to repeat.


The last thing to be sure to take a look at is the history.  History by itself is great; however, we wanted to show how often each history item is used, so there's now "history usage", which will show how many times that history item has been run.  The other thing new here is that instead of timing statements only when the user requests it, we now time and record everything.  You can access this very simply with "history time", which will show how long the things in the history took.  The format of the time is HH:MI:SS.fff, and we shorten the leading numbers when zero to make it easier to read and format.






KLRICE@ORCL>history time
....
 20           info s
 21  (00.050) select table_name from user_tables
 22           info+ s
 23           info hr.dept
 24           into scott.emp
 25           info scott.emp
 26           info+ scott.emp
 27           info all_tables
 28           info all_tab_comments
 29  (04.531) select * from all_tables t,all_tab_comments tc where t.owner = tc.owner and t.table_name = tc.table_name
 30  (00.420) select * from all_tables t,all_tab_comments tc  where t.owner = tc.owner (+) and t.table_name = tc.table_name(+) 



While these are 3 really quick things, there's a lot more in this new EA. We will blog about the other new things in the coming days.
