Tuesday, May 05, 2015
REST Data Services and SQL Developer
The database tools team put out three new GA releases and an update to our SQLcl.
Official Releases are here:
SQL Developer, Modeler, and Data Miner:
https://blogs.oracle.com/otn/entry/news_oracle_updates_development_tools
https://blogs.oracle.com/datamining/entry/oracle_data_miner_4_1
REST Data Services now with SODA
https://blogs.oracle.com/otn/entry/news_oracle_rest_enables_oracle
https://blogs.oracle.com/databaseinsider/entry/oracle_database_12c_as_a
Some of the feedback we received was that ORDS was difficult to get set up and running. That prompted an overhaul of the install process, and there are now a few ways to get going. As with earlier releases, there is a command line install; it now offers 'simple', 'advanced', and parameterized (silent) modes. Simple is the default and asks for minimal information. The catch is that it makes assumptions, and some of those assumptions, such as a USERS tablespace, may not work for your database. In that case try the advanced install, which prompts for _everything_ and then some for a fully customized install.
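For reference, here is a rough sketch of the two interactive command line modes, assuming ords.war is in the current directory; the parameter-file (silent) options are covered in the install guide.

# interactive install that asks minimal questions (and makes assumptions such as a USERS tablespace)
java -jar ords.war install simple

# interactive install that prompts for everything, for a fully customized setup
java -jar ords.war install advanced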
Taking the simple install one step further, REST Data Services is now embedded into SQL Developer so you can get up and running faster than ever. Under the Tools menu there are now options to Manage, Run, Install, and Uninstall (although why would anyone?).
The first page of the installation asks for two things. The first is whether to use a specific ORDS install or the embedded one; the version number is printed for whichever is chosen to make sure it is the right decision. The second is simply where to store the configuration. This configuration is just the normal set of files, so it is portable and can be taken to any other ORDS to set it up.
The next page asks for the database credentials and connect string. There is something new here: the user is now ORDS_PUBLIC_USER. This change comes with the decoupling from Application Express; ORDS is now completely standalone and APEX is no longer required.
If connecting as ORDS_PUBLIC_USER fails (as in the user doesn't exist), the wizard will prompt for the credentials needed to install ORDS.
When using APEX or any OWA Toolkit based applications, enter the database user those applications should be executed as.
In ORDS 2.x, the REST definitions were stored inside Application Express' repository. ORDS 3.0 has its own dedicated ORDS_METADATA database user which holds this information, which means there could be definitions in both places. For APEX 5.0, the screens in SQL Workshop will continue to operate against the tables in the APEX repository. For that to keep working, this wizard page has to be filled out with the 2.0 users so ORDS can access that data.
When operating like this, ORDS will use the metadata from both the 3.0 repository and the APEX repository. This gives developers the flexibility to continue developing wherever they prefer. New features such as Auto Enablement will only be in the new ORDS 3.0 repository.
What this means to you:
- Continue to use the APEX screens; everything will work just fine
- REST definitions created in SQL Developer will go into the new ORDS schema
- If there is a conflict, the ORDS repository will be chosen over the APEX repository
This screen gives the option to start ORDS as soon as the wizard finishes installing and setting up. It is also where the static file location is set, think /i/ for APEX.
The ORDS Administrator and ORDS RESTful User are used to connect to the running ORDS from SQL Developer for remote administration.
A quick final review of the options and it's up and running.
Friday, May 01, 2015
Logstash and Oracle Database
For anyone that doesn't know what Logstash is, head over to http://logstash.net/ and watch the video on the home page. Robin Moffatt has done a great blog post on the full ELK stack with OBIEE.
This blog post is a first cut at an input plugin for Logstash to gather metrics, logs, or anything that can be expressed in SQL. A huge caution that this is a 0.01 attempt and will get better as I need it to. The basic config below does a select of the session count every 5 seconds and reports it, in this case, to just STDOUT. There are many choices of where to send this information, such as Elasticsearch for visualization in Kibana, as shown in the OBIEE post above.
If anyone uses this, changes it, likes it, or doesn't like it, let me know.
Running it just requires adding the plugin path on the Logstash command line so it can find the orasql input; a sample invocation is shown after the configuration below.
The plugin itself is where everything Oracle-wise happens. Since Logstash is written in JRuby, I'm able to use the normal Oracle JDBC driver to access the database. In this case I have a ./lib folder with a local copy of the driver. This could just as easily be $ORACLE_HOME/jdbc/lib/ojdbc6.jar to ensure the driver matches the database version when running on the same server.
Since connecting and establishing a session is expensive when repeatedly issuing SQL, I keep a connection open. The connection could be closed for any number of reasons, so there is also an auto-reconnect: getConnection is called before any SQL is executed and re-establishes the connection if it is no longer valid. Before someone asks: no, SQL Developer will not get an auto-reconnect like this.
Here is the basic pipeline configuration that polls the session count every 5 seconds and writes it to STDOUT:

input {
  orasql {
    sql      => "SELECT count(1) sessions from v$session"
    dbuser   => "klrice"
    dbpasswd => "klrice"
    dburl    => "localhost/orcl"
    interval => 5
  }
}
output {
  stdout { codec => rubydebug }
}
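To run it, Logstash needs to be pointed at both the pipeline configuration and the directory holding the plugin. A minimal sketch, assuming a Logstash 1.x style agent invocation, that the configuration above is saved as orasql.conf, and that the plugin file is placed at ./plugins/logstash/inputs/orasql.rb; check bin/logstash --help for the exact flags in your version.

# assumes ./plugins/logstash/inputs/orasql.rb and ./lib/ojdbc6.jar exist
bin/logstash agent -f orasql.conf --pluginpath ./plugins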
And here is the orasql input plugin itself:
# encoding: utf-8
require "logstash/inputs/base"
require "logstash/namespace"
require "socket" # for Socket.gethostname
require 'java'
# SETUP THE JDBC DRIVER TO YOUR LOCATION
$CLASSPATH << "lib/ojdbc6.jar"
java_import 'oracle.jdbc.OracleDriver'
java_import 'java.sql.DriverManager'
# Run a SQL statement on an interval and emit each returned row as an event.
#
# Notes:
#
# * Each non-null column becomes a field on the event, keyed by the
#   lowercased column name.
# * The 'message' field of the event is a JSON-style string of those
#   same column/value pairs.
#
class LogStash::Inputs::OraSQL < LogStash::Inputs::Base
config_name "orasql"
milestone 2
$currConnection = nil
default :codec, "plain"
# Set this to true to enable debugging on an input.
config :debug, :validate => :boolean, :default => false, :deprecated => "This setting was never used by this plugin. It will be removed soon."
# SQL to run. For example, "select * from emp"
config :sql, :validate => :string, :required => true
# Database user to connect as. For example, "scott"
config :dbuser, :validate => :string, :required => true, :default => "/"
# Password for the database user
config :dbpasswd, :validate => :string, :required => false, :default => ""
# JDBC connect string, appended to "jdbc:oracle:thin:@". For example, "//localhost/orcl"
config :dburl, :validate => :string, :required => true, :default => "//localhost/orcl"
# Interval to run the sql. Value is in seconds.
config :interval, :validate => :number, :required => true , :default => 120
public
def register
@logger.info("Registering SQL Input", :type => @type,
:sql => @sql, :interval => @interval)
end # def register
public
def getConnection
if $currConnection == nil or ! $currConnection.isValid(100)
oradriver = OracleDriver.new
DriverManager.registerDriver oradriver
con_props = java.util.Properties.new
con_props.setProperty("user", @dbuser)
con_props.setProperty("password", @dbpasswd )
conn = Java::oracle.jdbc.OracleDriver.new.connect('jdbc:oracle:thin:@' + @dburl, con_props)
conn.auto_commit = false
$currConnection = conn
end
return $currConnection
end # end getConnection
public
def run(queue)
hostname = Socket.gethostname
loop do
start = Time.now
@logger.info? && @logger.info("Running SQL", :sql => @sql)
conn = getConnection
stmt = conn.prepare_statement @sql
rset = stmt.execute_query
while ( rset.next )
i=1
event = LogStash::Event.new
decorate(event)
cols = rset.getMetaData.getColumnCount
msg = ""
r=0
while ( i <= cols )
val = rset.getString(i)
if ( val != nil )
if ( r > 0 )
msg = msg + ","
end
event[ rset.getMetaData.getColumnName(i).downcase ] = val
msg = msg + "\"" +rset.getMetaData.getColumnName(i).downcase + "\" : \"" + val + "\""
r=r+1
end
i = i + 1
end
event['message'] = "{" + msg + "}"
queue << event
end
# release the statement resources but keep the connection open for the next interval
rset.close
stmt.close
duration = Time.now - start
@logger.info? && @logger.info("Command completed", :sql => @sql,
:duration => duration)
# Sleep for the remainder of the interval, or 0 if the duration ran
# longer than the interval.
sleeptime = [0, @interval - duration].max
if sleeptime == 0
@logger.warn("Execution ran longer than the interval. Skipping sleep.",
:sql => @sql, :duration => duration,
:interval => @interval)
else
sleep(sleeptime)
end
end # loop
end # def run
end # class LogStash::Inputs::OraSQL