
Shell Variable Indirection in a Database Build Script


Ever wanted to set a variable to the name of another variable, and from there, somehow get the value of the other variable? I did, recently, and this is what I had to do.

I work with numerous databases, but between them there are only 18 different types, and these cover all (current) systems in production and development. The first 3 characters of $ORACLE_SID define the system name, and we use a script that duplicates any of 18 template databases to create the desired new one. The script has to run on both the Bash and Korn shells, and must validate that the new database is going to be built from the correct database template – to catch DBA finger troubles! :)

The template databases exist as RMAN backups and have been created with the correct options, default tablespaces, users and all the desired options for the system, so after the build is complete, and the post-build validation script has been executed, the new database can be handed over to the users without any further changes. Passwords are set up randomly after the build has completed, by the build script, so anyone who knows the template database passwords won’t know them on a new build.

The script is passed a template database name on the command line and this needed to be validated to ensure that ORACLE_SID, the new database name, was permitted to be built with that particular template.

$ myscript.sh -t XXX ...

The script already validates XXX as one of the 18 allowed values, but a recent change means that the script now needs to carry out one of 18 different validations depending on the XXX parameter passed at run time.

How difficult could it be? As it turned out, not very – once you know about variable indirection in Bash. Happily, the following code also works in the Korn shell, which I needed it to do.

The Obligatory Hello World Example

The following example can be typed in at the Bash or Korn shell prompt.

ABC="Hello World"
X="ABC"
eval Y=\$"${X}"
echo "${Y}"

Hello World

Variable $ABC holds the value I am after; $X holds the name of the variable that holds the value I want. The eval command sets variable $Y to the value held in the variable whose name is held in $X. So $Y is set to “Hello World”. Simple!
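As an aside, if the script only needed to run under Bash, there is built-in indirection syntax that avoids eval entirely, and ksh93 offers namerefs instead. Neither form works in both shells, which is why the eval trick above was needed. A quick sketch (Bash 4.3 or later runs both forms):

ABC="Hello World"
X="ABC"

# Bash-only: ${!X} expands to the value of the variable named by X.
echo "${!X}"

# ksh93 (and Bash 4.3+) alternative: a nameref.
typeset -n Y=ABC
echo "${Y}"

Both echo “Hello World”.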

The Database Creation Script

In my database creation script, all I had to do was set up one variable for each of the permitted template databases. The name of each variable had to match the name of the template database that would be passed in, in upper case, by the -t parameter.

duplicate_database -t T_7 -o $ORACLE_SID ...

In the validation function, all I had to do was get the correct list of valid database prefixes into a separate variable using indirection; from there it was a simple case of grepping to see if the desired system prefix was present in the allowed list.

The code looked remarkably similar to the following and, as ever, systems and database names are not based on reality – to protect the innocent!

...
#----------------------------------------------------------------------------
# The following list of variables holds the permitted 3 character prefix for 
# a database name created with the appropriate template.
#
# For example, if the template is T_1, only databases named SYSxxxxx and
# DBAxxxxx are valid.
#----------------------------------------------------------------------------
T_1="|SYS|DBA|"
T_2="|PAY|HMN|SOP|"
...
T_18="|PRE|RTL|XXX|DAT|"

#----------------------------------------------------------------------------
# A function to validate the database name passed in against a database type.
#----------------------------------------------------------------------------
# $1 is the Database Template passed on the command line with "-t T_1" etc.
# $2 is the database name to be built. (aka ORACLE_SID)
# 
# Note, this uses indirection and so the database type MUST match the name of a
# validation variable set up previously.
#
# Return Code: 
# $? = 0 if first three characters of ORACLE_SID are valid for the template.
# $? = 1 if not.
#----------------------------------------------------------------------------
validate_db_template()
{
   #----------------------------------------------------
   # Make sure both parameters come to us in upper case.
   #----------------------------------------------------
   DB_TEMPLATE=`echo $1 | tr '[:lower:]' '[:upper:]'`
   DB_PREFIX=`echo $2 | tr '[:lower:]' '[:upper:]'`

   #----------------------------------------------------
   # DB_TEMPLATE now holds the upper case T_1 .. T_18 
   # name of a template database that we will use to 
   # build a new database as per ORACLE_SID/$2/DB_PREFIX.
   # We now need to get a list of the valid database 
   # names for this template.
   #----------------------------------------------------
   eval VALID_LIST=\$"${DB_TEMPLATE}"

   #----------------------------------------------------
   # We only need the first 3 characters of the DB name.
   #----------------------------------------------------
   DB_PREFIX=`echo "${DB_PREFIX}"|cut -c 1-3`

   #----------------------------------------------------
   # Then check if it is in VALID_LIST. Matching with
   # the "|" delimiters included guards against partial
   # matches. $? = 0 if found, 1 if not.
   #----------------------------------------------------
   echo "${VALID_LIST}" | grep -q "|${DB_PREFIX}|"
   return $?
}

To validate that a passed-in template database permits the database named by ORACLE_SID to be created, it was a simple matter of calling validate_db_template with the desired parameters and checking the value in $? after the function had returned:

...
validate_db_template "${TEMPLATE}" "${ORACLE_SID}"

if [ "${?}" != "0" ]
then
   echo "*** ERROR: validate_db_template failed"
   echo "*** ${ORACLE_SID} cannot be built with a database template of \"${TEMPLATE}\"."
   exit "${ERROR_DBNAME_VALIDATION_FAILED}"
fi
...

Using indirection in this manner saved me the horror of typing in a huge case statement that set VALID_LIST according to the current template name. In addition, future amendments, perhaps for Oracle 12c, will simply require a new template variable and its allowed systems to be created; no code needs to change.

You might not need a database duplication script like mine, but I’m sure that the ability to get the value of a variable whose name is unknown until run time might prove useful.

Have fun.


Tnsnames Checker Utility


I have made available, for free, a utility that will parse a tnsnames.ora file and report back on anything that it doesn’t like, such as duplicate entries, invalid characters, redefinitions of parameters, errors etc.

It’s a small utility, based on the ANTLR4 parser/compiler generator tool. I’ve had an interest in compilers and parsers for many many years – more than I care to remember – but this is only my second ever parser tool of any great use.

You will need Java 6 (which is actually version 1.6 when you run java -version) or higher. It has been tested with 1.6 and 1.7. I don’t have 1.8 yet. (I don’t actually like Java – loathe it, even – but in this case I gritted my teeth and made an exception!)

It works on Windows or Linux.

The program itself is downloadable from here, and there is a small pdf file which tries to explain it as well. Enjoy. Source code will be up on GitHub at some point soon, but it’s getting too late tonight to start fiddling!

Its grammar is based on the Oracle specification for an 11gR2 tnsnames.ora file.
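Usage is straightforward: the utility reads the tnsnames.ora file on standard input and writes its findings to standard error, so a typical run looks like this (the same invocation appears in the Tnsnames.ora Parser post further down this page):

./tnsnames_checker.sh < tnsnames.ora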

Convert a Tnsnames.ora File to a Toad Session Import File


Have you ever wanted a quick and easy way of converting all those database entries in your tnsnames.ora file, into something that Toad can use to populate the “sessions” grid? Read on.

Normally Toad offers you a drop down list of the various database entries in the tnsnames.ora that is being used, however, if your tnsnames.ora file contains an IFILE entry, then Toad doesn't follow the included file, and any aliases defined there - or in subsequent nested IFILEs - will not appear in the drop down list.

You can get around this by connecting to each database in turn and in doing so, this populates the grid of sessions in the "New Sessions" dialogue. However, this process is a tad on the fiddly side and very boring indeed. Therefore I've created a utility that allows you to read a tnsnames.ora file and from that, create a file that can be imported to populate the grid.

You will need Toad 11.x or higher to be able to import your connections. Previous versions do not have the ability to export and import the sessions. I tested this with Toad 11.6 which is the oldest version I currently have.

The utility is based on the tnsnames_checker that I previously announced. You can find that utility at this location.

This utility doesn't attempt any semantic validation, although the lexer or parser may highlight some syntax errors in the tnsnames.ora file. Everything in the tnsnames.ora file which is a database alias entry will be written to the output file.

Output is always to stdout and so should be redirected to a file of your choosing at run time – if you wish to import the results, that is.

Download and Install

Source code is available on GitHub, but you do not need it unless you intend to build or modify the utility.

You need to download the compiled code from this location. It is then a simple case of unzipping it and running the tns2toad.cmd file if you are on Windows, or the tns2toad.sh script if you are on some form of Unix. Your PATH is assumed to contain the location of the java executable. You can check by running the java -version command. If it barfs, you need to sort out your PATH.

Java 1.6 (aka Java 6) is the minimum required version of Java. The software has been tested with Oracle's Java 6 and Java 7. It should work with Java 8 as all versions are supposed to be backward compatible, but I have not been able to test it with OpenJDK's version of Java.

Tns2toad is a command line utility and you should run it from a DOS or shell session while your current directory is the location where you unzipped it to.

The following is a list of files that you should find:

  • antlr-4.4-complete.jar : the ANTLR4 runtime support for the parser section of the code.
  • tns2toad.jar : the runtime support for the actual utility itself.
  • tns2toad.cmd : a batch file for Windows users.
  • tns2toad.sh : a shell script for Linux and/or Unix users.

If you need to change the classpath, edit the latter two files as necessary to suit your system.
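For what it's worth, the launcher scripts amount to little more than a classpath and a java invocation, so a plausible sketch of tns2toad.sh looks like the following – the main class name here is my guess, so check the shipped script for the real one:

#!/bin/sh
# Sketch only - the main class name is a guess; adjust the jar paths
# to wherever you unzipped the utility.
java -cp "antlr-4.4-complete.jar:tns2toad.jar" TNS2Toad "$@"

The Windows tns2toad.cmd file will be the same idea, with a semicolon as the classpath separator instead of a colon.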

Parameter Details

If you run the utility with an invalid parameter, the correct usage details will be displayed, as follows:

C:\Software\ANTLR\TNS2Toad\test>tns2toad --help

Invalid option '--help'

Usage: tns2toad  filename
Options:   --oracle_home  The default oracle home to be used.
           --user         The default user for all connections.
Parameter: filename       The tnsnames.ora file to be parsed.

Tns2toad requires the options to be specified first and the mandatory file name last. You cannot mix and match. Options may be specified in any letter case, lower, upper or mixed. The options are as follows, and all are optional:

  • --help : Displays usage details. Technically this is an error, but any incorrect option will display the usage details as shown above.
  • --oracle_home : If you wish, you can set all entries in the generated file to use the same Oracle Home folder. This folder should be specified in full, on the command line. If omitted, the import file will not specify an Oracle Home and Toad will use whatever you have configured as the default when you run the import.

    If there are spaces or special characters in the path name, wrap the full path in double quotes. Beware, it is not likely that Oracle will work correctly from a folder which has spaces in the path name.

  • --user : If you wish, you can set each and every one of the imported sessions to use the same user name. Obviously, a tnsnames.ora file doesn't have user details, so by default, there will be none. If you use the same user on each (or most) of your connections, specify it here and save some typing later on in life.

Running the Utility

As mentioned above, the utility reads a tnsnames.ora file and writes a Toad connections export file to stdout so you will need to trap the output and redirect it to a file of your choosing.

To run the utility with all defaults set:

tns2toad c:\tns_admin\tnsnames.ora >c:\myToadSessions.txt

That will set OracleHome and User to blank, and ConnectAs to "Normal".

To run the utility with a specific Oracle Home for all connections:

tns2toad --oracle_home c:\oracle\product\11gr2\client1 c:\tns_admin\my_tnsnames.ora >c:\myToadSessions.txt

That will set OracleHome to the supplied value for all connections, User will be set to blank and ConnectAs will be set to "Normal".

To run the utility with a specific database login for all connections:

tns2toad --user system c:\tns_admin\tnsnames.ora >c:\myToadSessions.txt

That will set OracleHome to blank, User will be set to "system" for all connections, and ConnectAs will be set to "Normal". If the user supplied is "sys", then ConnectAs will be set to SYSDBA.
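The options can also be combined, as long as they all appear before the file name:

tns2toad --oracle_home c:\oracle\product\11gr2\client1 --user system c:\tns_admin\tnsnames.ora >c:\myToadSessions.txt

That will set both OracleHome and User for every imported connection.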

Output File Format

Each entry in the output file will resemble the following. There will be one section for each database alias in the input file:

[LOGIN1]
User=SYS
Server=barney
AutoConnect=0
OracleHome=c:\oracle\home
SavePassword=0
Favorite=0
SessionReadOnly=0
Alias=
Host=
InstanceName=
ServiceName=
SID=
Port=
LDAP=
Method=0
Protocol=TNS
ProtocolName=TCP
Color=8421376
ConnectAs=SYSDBA
LastConnect=19600407031549
RelativePosition=0
GUID=

There will be some other text at the end of the output which is required for Toad to recognise the file and to import it, but that is not shown here. The following entries are of note:

  • [LOGINn] : this is the section header. The numeric suffix starts at 1 and increases by 1 for each new entry. There will be one of these sections for each database alias found in the tnsnames.ora file.
  • RelativePosition : this is set to the [LOGINn] value minus 1. The grid starts numbering its entries from zero while the logins start numbering at 1. If your grid is currently sorted into any desired order, the RelativePosition value will be ignored and the entry will be placed in the grid according to your chosen sort order.
  • User : Normally blank and if so, you will be prompted at login to supply a user name and password for the connection. May be populated if you specified the --user option on the command line.
  • Server : this is the alias name, including domain name if present, read from the tnsnames.ora file. In the event that an entry in tnsnames.ora consists of an alias list, each one will get a separate entry in the output file.
  • OracleHome : Normally blank and if so, Toad will use the default Oracle Home at runtime, for the connection. May be populated if you specified the --oracle_home option on the command line.
  • ConnectAs : This will normally be "Normal" but if a --user sys option was specified on the command line, it will change to "SYSDBA" as all SYS connections must be as sysdba. Warning: Don't tell Bert if you use SYS though! :-)

    There is no option available to allow connections as SYSOPER.

  • LastConnect : this is set to something resembling my date and time of birth in YYYYMMDDHHMMSS format. Yes, I am that old! This value should allow you to sort your grid by the Last Connect column and keep all the new tnsnames entries at the bottom. Until you need them of course.
  • Color : Similar to LastConnect above, this is set purely to separate the imported entries from the ones you added yourself.

    As far as I am aware, no-one in the world actually likes the teal colour – except a company I used to work for which is, sadly, no longer in business – so it should be safe enough to assume that it will indeed help keep the imported entries separate from your manually entered ones.

Obviously, passwords are not part of a tnsnames.ora file, so the utility is unable to set those up for you. Equally, Toad encrypts them based on your computer login amongst other things, so it's practically impossible for tns2toad to set passwords.

And finally, at least for the UK, no, I haven't spelt favorite or color wrongly, Toad has! But that's how it has to be when dealing with "foreigners" ;-)

Importing the Results

The following applies to Toad 11.6 because that's how I tested it; later versions may be slightly different. Older versions will most likely not have the ability to import connection details. However, I have a plan – see below.

  • Start Toad.
  • Click Session -> New Connection
  • When the dialogue appears, there will be two buttons showing icons resembling a 3.5" floppy disc with a blue arrow – like the ones you can see somewhere close to here, over on the right. You want the one with the arrow pointing out of the disc. Hover over the icon and it should pop up a hint that says "import". Click it.
  • On the subsequent "Connections Import File" dialogue, navigate in the usual manner to the location where you saved your file. Select it, and click the "open" button.
  • After a short delay, the grid should be showing all the new connections.

If your grid is sorted by the Last Connect column, the new connections will be added at the bottom. Strangely, on my Toad at least, the date doesn't appear but maybe it's because it's such a long long time ago!

Ahem! No, it's because there was a bug in the LastConnect value, it had two extra digits! This has been fixed.

My grid now looks like the following, with the newly imported sessions nicely collected at the bottom.

(Screenshot: the sessions grid, with the imported sessions at the bottom.)

Deleting Extraneous Entries

In the unlikely event that you imported some sessions that you really do not need, simply select them (click, CTRL-click etc) and press the DEL key to delete the unwanted ones.

Did I mention a Plan?

Prior to Toad 11.x, it wasn't possible to import connections. There is a way to get around this, but I can't accept responsibility for foul ups and I have not tested this method. The format and content of the connections export file and the connections.ini file are remarkably similar.

You need to shut down Toad, and then find the user files location on your PC and, with Toad closed, copy the connections.ini file to a safe place.

Open the connections.ini file and replace the contents with the contents of the file generated by tns2toad. Before saving the file, scroll to the bottom and remove only the following line:

<<Split file here. CONNECTIONS.INI above, CONNECTIONPWDS.INI below>>

Save the file and exit. When you start Toad, the list of connections on the grid should be set to the ones you generated - without any passwords.
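Incidentally, the manual edit can be avoided. With the generated output in, say, myToadSessions.txt, this one-liner strips the marker line and writes the new connections.ini in one go – a sketch, so keep your backup handy:

grep -v "<<Split file here" myToadSessions.txt > connections.ini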

You could try appending the generated file contents to the existing connections.ini file, but note that you will probably have to renumber the various section headers – I do not know how Toad copes when you import two connections with the same LOGINn name. A renumbering sketch follows.
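If you do try appending, a quick awk pass over the combined file can renumber the headers sequentially first – again a sketch, with combined.ini as a hypothetical working file, so take a backup before experimenting:

# Renumber every [LOGINn] section header from 1 upwards.
awk '/^\[LOGIN[0-9]+\]$/ { printf("[LOGIN%d]\n", ++n); next } { print }' combined.ini > connections.ini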

Enjoy.

Asmcmd or ASM Instance Backups or Queries Hang


Sometimes an ASM instance hangs for no apparent reason and this causes problems when backing up the ASM Metadata. Running queries against V$ASM_DISK and similar views may also hang. This blog post should go some way to helping diagnose the problem, and providing a fix.

ASM metadata backups on a couple of our servers had been failing. The backups were run from a system called CommVault, and the job scheduler there showed that they simply sat at 0% forever – or would have, if we allowed them! There were no error messages or codes to speak of – the job simply sits in CommVault at 0% and never ends.

This was tracked down to the md_backup command being run from asmcmd. Running the command manually also just hung and the session had to be killed to release it.

Using sqlplus on the ASM instance and attempting to query V$ASM_DISK or other similar ASM views, also hung.

Looking at V$SESSION in the ASM instance with the following query shows the problem:

set lines 350 trimspool on pages 300

select sid, state, event, seconds_in_wait, blocking_session
from   v$session
where  blocking_session is not null
or sid in (select blocking_session 
           from   v$session 
           where  blocking_session is not null)
order by sid;
       SID STATE    EVENT                 SECONDS_IN_WAIT BLOCKING_SESSION
---------- -------- --------------------- --------------- ----------------
        15 WAITING  enq: DD - contention            73683              254
        16 WAITING  enq: DD - contention            15692              254
        17 WAITING  enq: DD - contention           117109              254
        93 WAITING  enq: DD - contention            61107              254
        95 WAITING  enq: DD - contention           242327              254
        96 WAITING  enq: DD - contention            68731              254
       167 WAITING  GPnP Get Item                 2471652
       172 WAITING  enq: DD - contention           117109              254
       173 WAITING  enq: DD - contention           147026              254
       176 WAITING  enq: DD - contention            37787              254
       177 WAITING  enq: DD - contention           658138              254
       178 WAITING  enq: DD - contention            42238              254
       251 WAITING  enq: DD - contention           315711              254
       253 WAITING  enq: DD - contention            97075              254
       254 WAITING  rdbms ipc reply                     0              167
       255 WAITING  enq: DD - contention             4140              254
       257 WAITING  enq: DD - contention           521537              254

17 rows selected.

Almost all hung sessions are waiting for sid 254. Sid 254 is itself waiting on 167 which is not waiting on a session, but on the GPnP Get Item event.

A search of MOS shows that this is caused by an unpublished bug. Note 1375505.1, which mentions killing the gpnpd.bin process with a HUP (this causes it to immediately restart), refers the reader to note 1392934.1 for full details. That latter note simply says:

kill -HUP 

Full details indeed!

In our specific case, the following was required:

ps -ef | grep -i g[p]npd
grid      4084     1  0  Jul 15  ?        04:37:48 /app/gridsoft/11.2.0.3/bin/gpnpd.bin
su - grid
Password: ******

kill -HUP 4084

It can be seen that this is safe, according to Oracle, and the gpnpd.bin process will be automatically restarted – even on production systems!

ps -ef | grep -i g[p]npd
grid     19015     1 14 09:23:10 ?        00:00:00 /app/gridsoft/11.2.0.3/bin/gpnpd.bin

It can be seen from the above that the daemon is running and has a new pid and start time. If we check in the database again, there will be no waiting sessions and the ASM Metadata backups will work, as will querying V$ASM_DISK etc.

How to Fix a Broken ASM SPFILE, held within ASM


My server rebooted itself and when it came back up, none of the databases or ASM had restarted. Everything is 11.2.0.3 or 11.2.0.1 with ASM being 11.2.0.3 – so Oracle Restart should have kicked in.

As usual, any identifying names, servers, domains, databases etc have been obfuscated to protect the innocent.

$srvctl start asm
PRCR-1079 : Failed to start resource ora.asm
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'myserver.mydomain.net:1899'

The LOCAL_LISTENER parameter is incorrect; it should be ‘myserver.mydomain.com:1899’ with a ‘.com’ and not a ‘.net’.

We have a problem in the spfile that needs to be fixed. Where is it located so that it can be converted to a pfile and corrected? The usual place to check is $ORACLE_HOME/dbs.

$cd $ORACLE_HOME/dbs
$ls spfile*
spfile* not found

It isn’t in the normal location; what does Oracle Restart know?

$srvctl config asm -a | grep -i spfile
Spfile: +DATA/asm/asmparameterfile/registry.123.123456789

The spfile name may also be listed in the alert.log as part of a startup. It is for me in this case:


$grep "^Using.*spfile" alert_+ASM.log | tail -1
Using parameter settings in server-side spfile +DATA/asm/asmparameterfile/registry.123.123456789

Now we have a “Catch 22” chicken and egg problem. The spfile is located inside ASM and we can’t start ASM to extract and fix it, because we need the (broken) parameter file to start ASM.

There are numerous blog postings on the internet that explain how to start ASM, or extract the spfile, when the spfile it needs to start is in ASM, but due to a missing $GRID_HOME/gpnp/myserver/profiles/peer/profile.xml file, those were not an option here. (I think the problem is that the profile.xml is used by RAC only.)

On a normal database, you can create a pfile from the spfile even if the instance is not running. Will that work?

$sqlplus / as sysasm
Connected to an idle instance.

SQL> create pfile='/home/oracle/pfile.ora' from spfile='+DATA/asm/asmparameterfile/registry.123.123456789';
create pfile='/home/oracle/pfile.ora' from spfile='+DATA/asm/asmparameterfile/registry.123.123456789'
*
ERROR at line 1:
ORA-01565: error in identifying file '+DATA/asm/asmparameterfile/registry.123.123456789'
ORA-17503: ksfdopn:2 Failed to open file +DATA/asm/asmparameterfile/registry.123.123456789
ORA-01034: ORACLE not available

That was expected, but had to be tried!

Method 1

Maybe a default pfile can be created from the alert log’s listing of the non-default startup parameters from the last time it started?

$cd /app/oracle/diag/asm/+asm/+ASM/trace
$view alert_+ASM.log

...
Using parameter settings in server-side spfile +DATA/asm/asmparameterfile/registry.123.123456789
System parameters with non-default values:
  large_pool_size          = 12M
  instance_type            = "asm"
  remote_login_passwordfile= "EXCLUSIVE"
  local_listener           = "myserver.mydomain.com:1899"
  asm_diskstring           = "/dev/oracleasm/disks/disk*"
  asm_diskgroups           = "FRA"
  asm_power_limit          = 1
  diagnostic_dest          = "/app/oracle"
USER (ospid: 9251): terminating the instance due to error 119
Instance terminated by USER, pid = 9251

So that’s one way of extracting the non-default startup parameters into a temporary pfile, for those awkward times when you cannot get at the spfile needed to start ASM because it is located within ASM itself. Extract the above settings from the alert.log and start up with that temporary pfile. Once started, create a new spfile, update Oracle Restart, and Robert is your mother’s brother.
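For clarity, the temporary pfile built from the alert log lines above would look something like the following – a sketch, so take the values from your own alert log, not mine:

*.instance_type='asm'
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
*.local_listener='myserver.mydomain.com:1899'
*.asm_diskstring='/dev/oracleasm/disks/disk*'
*.asm_diskgroups='FRA'
*.asm_power_limit=1
*.diagnostic_dest='/app/oracle'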

However, depending on how long ASM has been up, what’s to say that any of the listed parameters are still valid? After all, since startup, someone changed the LOCAL_LISTENER parameter and it was only when the instance next started up that the foul up became apparent.

Method 2

There is another way. Thinking, as they say, outside the box (yuk! I avoid clichés like the plague!) about how tnsnames.ora allows IFILE commands suggested that Oracle might let me create a pfile which specifies the existing spfile name and then sets the correct LOCAL_LISTENER parameter, overriding the broken setting in the spfile.

I confess, I also had a very vague recollection from way back when spfiles were first introduced, that I had seen/read/heard/tried something like this already, but as mentioned, it was a very vague recollection! Nevertheless, let’s create a plain vanilla pfile:

$vi /home/oracle/initASMtemp.ora

*.spfile="+DATA/asm/asmparameterfile/registry.123.123456789"
*.LOCAL_LISTENER='myserver.mydomain.com:1899'

The correction goes after the spfile line, so that it takes effect rather than being overridden by the broken setting in the spfile – assuming this trick works!

$sqlplus / as sysasm
Connected to an idle instance.

SQL> startup pfile='/home/oracle/initASMtemp.ora';
ASM instance started

Total System Global Area  283930624 bytes
Fixed Size                  2181896 bytes
Variable Size             256582904 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted

We have a running ASM system!

Fix the broken parameter in the existing spfile:

SQL> alter system set local_listener='myserver.mydomain.com:1899' scope=spfile;
System altered.

SQL> show parameter local

NAME             TYPE        VALUE
---------------- ----------- -------------------------------
local_listener   string      myserver.mydomain.com:1899

A shutdown and restart later and the spfile is once more working correctly.

Add and Drop Discs From ASM in a Single Command


Recently I was tasked to do something that I hadn’t done before: swap out all the existing discs in the two diskgroups +DATA and +FRA, with minimal downtime. Almost everywhere I looked seemed to indicate that I had to add the new discs, rebalance, drop the old discs and rebalance again. My colleague, Ian Greenwood, had a much better idea – thanks Ian.

alter diskgroup DATA add disk
--
'/path/to/disk_1' name DISK_1001,
'/path/to/disk_2' name DISK_1002,
...
'/path/to/disk_n' name DISK_100N
--
drop disk
--
DISK_0001,
DISK_0002,
...
DISK_000N
--
-- This is Spinal Tap!
--
rebalance power 11;

Then the same again for +FRA and we were done. Well, I say done – once the rebalance had finished we were done, and the Unix team could then completely remove the old discs. That did need ASM to be bounced, which was a bit of a nuisance for the (one) database on the server, but the users were happy to let us take it down.
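As an aside, you can keep an eye on a rebalance from the ASM instance itself; when the following query returns no rows, the rebalance is complete (exact columns vary slightly between versions):

select group_number, operation, state, power, est_minutes
from   v$asm_operation;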

Job done and very little messing around. Sometimes, it’s helpful to look at the Oracle Manuals before hitting MOS or Google (other web search engines are available – but they are not as good!) for hints when you have new stuff to do.

Yes, I spell disc with a ‘c’ while Oracle spell it with a ‘k’. :-)

A Useful Perl Tutorial


I’m not a Perl developer, I like to be able to read my code after I’ve written it! 😉 However, some people do like Perl and there are even people out there who would like to learn it.

I was recently contacted by a company called Udemy who create tutorials and carry out training courses for various topics – why not give them a look if you need any training. Anyway, they were updating me on their new introductory Perl tutorial. I think it’s well worth a look – if you are that way inclined!

You can check out their web site as well, there is probably something there that might be useful to you.

Cheerio and RIP Mum


Press and Journal announcement

Mum passed away recently. What follows is the full service as conducted by Janet Donnelly of the Humanist Society Scotland. Janet has done a number of funerals for our family in recent years and I’d happily have her do mine when the time comes!

We asked Janet for a loyalty card – you know, pay for 9 get the 10th free – but she was having none of it! 😀

I’m grateful to Janet for allowing me to put her whole service up here on the web for all to see. Thanks Janet. I’ve corrected a couple of bits in the following as the text below is from Janet’s draft version, which is close enough to the final version. (It was a gerbil, not a hamster, and it was a vicious “love child” of a rodent and would chew through the bars of the cage to get you!)

A Celebration of the Life of Jean Martini.
28th March 1939 – 14th August 2015.
William Watsons.
Elgin Cemetery.
11am Thursday 20th August 2015.

Ceremony composed and conducted by:

Janet Donnelly
Authorised Celebrant
Humanist Society Scotland

Sixties music compilation

Good morning ladies and gentlemen, family and friends and a very warm welcome to you all. We have come together this morning to celebrate the life of Jean Martini and to offer one another support in our sorrow.

Jean’s life was a full one. She packed a lot into it and whilst today is of course a very sad occasion, I hope that it will also be an uplifting and positive appreciation of who she was and how she lived.

My name is Janet Donnelly and I’m an authorised celebrant from the Humanist Society of Scotland. It’s my great privilege to have been asked by Jean’s family to lead this celebration of her life today.

Humanism is a philosophy of life which reflects the views of millions of people around the world. We believe that we have one life to live and that we should live that life to the full whilst respecting the views and beliefs of others.

This was very much how Jean lived and her family thought that a Humanist ceremony of celebration would allow them to say their farewells to Jean in a way that felt right for her and for them.

In keeping with the principles of Humanism, our ceremony for Jean won’t have any religious elements but there will be a pause for reflection a little later on which those of you who have a religious faith or belief may wish to use to remember Jean in your own private way.

I didn’t know Jean and so I met with her family and with some of the friends and neighbours who knew her best on Tuesday night so that they could tell me about her. There were tears of course as they reflected on the place that she occupied in their hearts but it won’t come as any surprise to any of you when I tell you that there was a lot of laughter too that night. I left, nearly three hours after I’d arrived with twelve pages of notes – some of which I’ll share with you all just now and some which are definitely best left to be shared later on at The Mansefield over a glass or two or three of wine…….

I’d like to begin with a brief biography of Jean’s life and then there will be lots of memories and recollections of who she was and what made her tick. She was, amongst other things, a very proud mum and granny, a treasured mother in law and a good friend and neighbour and I hope that you all recognise the Jean that you knew and loved in our ceremony today.

Jean came into the world on the 28th of March 1939 at The Wards Dairy in Elgin and she was the eldest of four children for Sandy Stephen who was a milk roundsman and his wife Margaret.

Jean went to school in Archiestown and as a teenager she loved dancing. She even got caught one night climbing out of her bedroom window to go off to a dance – much to her father’s disapproval. On another night she was climbing back in the window when she managed to put her foot right in to a cake that had been baked earlier on for a special occasion.

When she left school, Jean had various jobs – amongst them she worked at Barmuckity, as an egg packer at Brumley Brae and at The Laichmoray as a chambermaid. She used to tell tales from her time at the Laich, of local businessmen who used to chase her round the bed whilst she was trying to get on with her work. She was, presumably too nimble to ever let them catch her.

In 1958 Jean married Joe Dunbar and they went on to have three sons together – firstly Norman, then Gordon and finally Sandy who completed their family.

After 16 years of marriage, Jean and Joe went their separate ways and divorced and then, in the mid-seventies Jean met Enrico at a wedding dance and they hit it off. Enrico and Jean shared a similar outlook on life – they were both warm and welcoming characters who enjoyed life to the full and they set up a very happy home together.

Jean learned to speak fluent Italian and they went often to Italy to stay at their house in San Remo. She knew all of the local dialects and was practically an honorary Italian – haggling at the markets and tasting all of the produce before buying it. Jean used to say that going to Italy wasn’t going on holiday – it was just swapping one kitchen sink for another.

They often took friends with them and Caroline and Jake have lots of wonderful memories of the journey south through France – and this is all the more surprising when you understand that Jean did a lot of the driving despite the fact that she took her test three times and failed…three…times….

Enrico was an accomplished chef of course but Jean too was a wonderful cook and hostess and she loved nothing more than a houseful of people, long leisurely meals and copious amounts of wine to make the evening go with a swing. The alcohol wasn’t just to be found in the glasses though. There are some vivid and some not so vivid memories of the night when there were brandied cherries for dessert and even if you weren’t sozzled when they arrived, you definitely were by the time the coffee was poured.

As Norman, Gordon and Sandy made lives for themselves, Jean became a mother in law and then a granny too. Kathy spent a lot of time with Jean when she and Sandy were together and she credits Jean with teaching her skills like how to cook game Italian style, hanging her sheets out properly on a rope and how to ‘thorough’ a room. Jean was always incredibly house-proud and even when her health began to fail in later life, the house was still spotless.

Jean’s insistence on having her house spick and span, combined with her naturally warm personality meant that running a bed and breakfast was the perfect occupation for her. The house at the bottom of Reidhaven Street, being opposite the railway station was in prime position and Jean was the perfect hostess.

As a granny, Jean loved to spend time with Amber, Michael and Rachel and she used to bake with the girls: rock cakes and cupcakes, and make up games for them all to play. Key in the Tree will go down in family legend, as will Jean’s mini sports days with her own egg and spoon and sack races. At the B&B, Jean would offer guests a tray of tea and biscuits when they arrived and she did the same for her grandchildren – but for them it would be juice in a mug, rich tea biscuits sandwiched together with butter and a kinder egg to complete the treat.

Jean’s neighbour Doreen remembers Jean as being caring and kind – when Doreen had to dash off to the hospital because her daughter was in labour, Jean didn’t think twice about taking in the three students from Shetland who had just arrived to lodge with Doreen. Jean welcomed them in to her home, fed them and watered them until Doreen came home later on. When Doreen’s cat sadly died after being hit by a car, it was Jean who tenderly wrapped it in a towel and then buried it in the garden to save Doreen the distress and upset of having to do it herself.

Seventeen years ago Enrico was diagnosed with Parkinson’s disease and Jean effectively became his carer. They were a devoted couple and there is nothing that she wouldn’t have done for him, or he for her. In 2003 they were married after being together for nearly 30 years.

Several years ago, Jean’s family noticed that she wasn’t herself and she was diagnosed with vascular dementia. It came on gradually but eventually it became clear that she couldn’t manage safely on her own. After falling and breaking her leg, she moved into Abbeyvale and then into the Grove where she was happy and contented. Enrico moved into The Grove as well so they could still be together.

Jean and Enrico still managed to have ‘date nights’ courtesy of The Laichmoray Hotel. They would send over a meal for them and they would have a table laid specially for them so that they could continue some semblance of the life that they had enjoyed before their advancing years slowed them down. The only difference was that the glasses of wine were fewer – much to Enrico’s disgust.

In April 2013, Enrico passed away and Jean continued to be looked after at The Grove. She had lots of visitors who did their best to enrich her quality of life with jigsaws, crosswords and often just a good blether but about three weeks ago, Jean was admitted to Dr Grays with a chest infection which turned to pneumonia.

Again she had lots of visitors but her frail health meant that she couldn’t fight off the infection. She had someone with her round the clock and Sandy was sustained by the love and support of those around him – especially Jean’s sister Phyllis, his partner Jo and his daughter Amber. They all played their part in making Jean’s final days as comfortable as possible – sitting chatting by her bedside, fetching MacDonalds to keep Sandy’s strength up, or filling his squeezy sports bottle with Ribena to sustain him through the night. Amber especially showed enormous stamina – getting up early and spending 11 or 12 hours a day at her granny’s bedside. Jean would have been very proud of her.

Eventually though, Jean’s strength ran out and she died gently and peacefully with Sandy by her side on Friday the 14th of August. She was in her 77th year.

When I went to visit Sandy and everyone else on Tuesday night, everyone had their own personal stories of Jean and what she meant to them. There were far too many to tell just now but I’d like to share just a few which I hope will reflect just what a wonderful lady Jean was.

Jean – as I said earlier enjoyed life and was known on occasion to partake of a glass of wine or two. I saw pictures of her dressed as a chicken – but nobody quite seems to know why she was sporting such a colourful costume.

Chickens also feature in another portion of Jean’s life: one day a random chicken appeared in the garden and roosted in the plum tree. Jean adopted it and named it Brenda. Jean also adopted a baby blackbird in the hot summer of 1976 and to feed it, she had to dig worms out of the garden. The drought meant that the worms were only to be found several feet down but Jean persevered with her spade.

As a mum, Jean always welcomed her sons’ girlfriends and was never stuffy or old fashioned. When one of them was visiting in the evening, Jean would go off to bed after asking ‘Will it be one or two for breakfast…?’ One day she took Gordon to one side and said to him seriously: ‘Gordon, I don’t mind you bringing girls home, but could you take home a nice looking one now and again…’!

She was just joking of course – Jean was open minded, liberal and accepting. Her good friend Graham told me that whoever you were, royalty or street sweeper, Jean would treat you all the same. Jean never took sides in her sons’ relationships and even when Kathy and Sandy split up, Jean and Kathy stayed close.

On one of their frequent trips to Italy, Caroline, Jake, Enrico and Jean drove away up into the mountains to visit a friend and, as was usually the case, the welcome was warm and the wine flowed. The host asked Caroline to choose a rabbit from the many fluffy bundles that were running about and when she said that she couldn’t because she’d never be able to get it home, the host laughed and told her it would be for the pot for their supper. On the way home, rather the worse for wear, they pulled the car up at the side of a road next to an allotment, Jean peered drunkenly over the fence at the vegetables growing there and asked Caroline: ‘are they cabbages….or are they rabbits……?’ and they all nearly wet themselves laughing. Jean was always good fun and always up for a giggle. (You had to be there!)

As wee boys, her sons wanted a pet but they weren’t allowed one. Undaunted, they hatched a plan and so, on Mother’s Day, they bought their mum a gerbil called Hamish….as you do. Problem solved.

A similar problem was all of the dirty dishes and pans that they had to wash on Christmas Day. Their solution? They bought their mum a dishwasher for Christmas!

Jean had green fingers and her garden was her pride and joy. She taught her grandchildren to grow flowers, she grew veggies and she swapped cuttings with neighbour Doreen. Jean also had a particular fondness for her favourite wheelbarrow which was called into service often.

She hated water and never learned to swim and she could never master the hand eye co-ordination and balance of a pushbike – which frustrated her greatly.

She could be stubborn. When she slipped on the ice and broke her hip, she still managed to walk up to the Lido and back because she didn’t want to make a fuss or bother anyone.

There is so much to say about Jean and it would be easy to remember only how she was in the most recent years but in the prime of her life she was vital and bubbly and vibrant. She was warm and loving but could sometimes be fiery too. She was at the heart of her family and even as her health failed, she was surrounded by people who loved and cared for her. She inspired love and loyalty and I know that there will be many as well outside the close circle of family and friends who will mourn her passing.

I can’t possibly hope to have done her justice here in such a short time but I hope that by sharing some of the recollections and memories that I was lucky enough to hear, I have stirred some of your own memories too.

In the time to come, talk often about Jean and remember the part she played in your life. I know that the next wee while will be tinged with sadness that she’s no longer physically here but I hope that in time, that sadness will be replaced by a sense of joy and gratitude that you had the privilege of sharing her life. The writer AC Grayling wrote:

As long as we love each other, and remember the feeling of love we had, we can die without really going away. All the love you created is still there. All the memories are still there. You live on – in the hearts of everyone you have touched and nurtured while you were here. Death ends a life, not a relationship.

And so I hope it is with Jean. She’ll always be a part of you because she made your lives better. That will be her legacy.

She was very much loved and she’ll be very much missed.

I’d like to invite you all now to take a few moments to remember Jean privately in whichever way brings you comfort.

1 minute’s silence.

In a few moments we’ll be moving to Elgin Cemetery where we’ll lay Jean to rest in a short graveside ceremony. Those of you who would like to join us will be most welcome.

Before we move on though, Norman, Gordon and Sandy have asked me to thank you all for coming today to offer your support and for all of the cards and messages of sympathy that they’ve received since Jean’s death. It’s been a great comfort to them to know that they’re in your thoughts at such a sad time.

Over the years there were many people who played supporting roles at different stages in Jean’s life but there are special thanks to the staff at The Grove for looking after Jean and making sure that she was happy and contented, as well as the staff in ward 7 at Dr Grays who went above and beyond the call of duty for Jean and for the family as her life drew to a close. There is an extra special thank you too for Beth who was close to Jean for a long time.

Norman, Gordon and Sandy would like to invite you to join them for tea and refreshments at the Mansefield where they look forward to sharing some of your memories of Jean. There is a lot I haven’t had a chance to talk about so please do go if you can.

Lastly, there’s a collection at the door as you leave if you’d like to make a contribution and the money will go to local charities in Jean’s name.

Thank you all again for coming. Sit for a moment now and remember Jean as we listen to our final song today. Jean loved her sixties music which you heard as you arrived but this is her all time favourite track by one of her favourite artists. Elvis Presley and Love Me Tender.

Music – Love Me Tender.

At the cemetery

Let her be safe in sleep
As leaves folded together
As young birds under wings
As the unopened flower.

Let her be hidden in sleep
As islands under rain,
As mountains within their clouds,
As hills in the mantle of dusk.

Let her be free in sleep
As the flowing tides of the sea,
As the travelling wind on the moor,
As the journeying stars in space.

Let her be healed in sleep
In the quiet waters of the night
In the mirroring pool of dreams
Where memory returns in peace,
Where the troubled spirit grows wise
And the heart is comforted.

We have gladly shared our memories of Jean as we celebrated her life and now we must say our final farewells to that part of her which cannot remain with us. We gather here to pay our final respects and to return her body to the elements that nurtured and sustained her for 76 years.

To our memories we commit:
Jean’s warm and vibrant personality
Her skills as a cook and as a hostess
Her sense of humour
Her openness
Her love for her family and her friends
And the happiness she found with Enrico who was her best friend and her soulmate.

We are glad that she lived and that we knew her
We honour her life, accept her departure and cherish her memory.
We have enjoyed her company and the times we shared,
Now in peace and thoughtfulness we bid her farewell.

Thank you all once again for coming ladies and gentlemen. Please do go if you can to The Mansefield and in the time to come talk often about Jean’s place in your hearts. In that way she’ll always be a part of who you are.

I’d like to leave you with a few words from the Native American tradition which may help you in the time to come.

May the stars carry your sadness away
May the flowers fill your hearts with beauty
May hope forever wipe away your tears
And above all, may love make you strong.

Thank you.

The photo shows Alison and Jean.



HPLIP Stops Working After Linux Mint 17.2 Upgrade


I was able to use HPLIP with Mint 17.1, but when I upgraded to 17.2 attempting to run the utility did nothing at all. The solution:

sudo apt-get install hplip-gui

Works perfectly for me now.

Getting Arduino Working from a Windows 7 VirtualBox Guest


Do I like problems or what? :-) I’m running Linux Mint 17.2 as my host, and I have a VirtualBox 5.0 VM running Windows 7 Professional. I decided I’d like to be able to run the Arduino software from within the VM – not talking to an actual Arduino, but to a bare bones setup for programming ATtiny85 devices. The following might be of use to other people as it explains how an FTDI-style USB-serial device can be automatically assigned to the VM, rather than to the host, when it is plugged in while the VM is running.

Set up the Host First

There are two things you need to do on a Linux host to enable USB devices for the guest(s). The first is to ensure that the user you normally run VirtualBox under is a member of the vboxusers group; if not, add the user and log out and back in again. (You don’t need to reboot the machine, just log out of your user and back in again.)

$ groups  
adm dialout cdrom sudo dip plugdev lpadmin sambashare

$ sudo usermod -a -G vboxusers your_user_name

Now logout and back in again.

$ groups
adm dialout cdrom sudo dip plugdev lpadmin sambashare vboxusers

We can see that the vboxusers group membership is now active.

The second task is to discover the USB-serial device’s vendor, product and serial numbers. (Mine is actually a Silicon Labs CP2102 rather than a genuine FTDI chip, as the output below shows, but the process is the same.)

$ lsusb
...
Bus 007 Device 007: ID 10c4:ea60 Cygnal Integrated Products, Inc. CP210x UART Bridge / myAVR mySmartUSB light
...

The above is fine; the ID field shows that the id of this particular device is 10c4:ea60, which is the vendor id followed by the product id. There is no serial number though, and that might be needed if there is more than one of the same device. Try this:

$ VBoxManage list usbhost
...
UUID:               9f963985-1fb7-49e0-b066-e1261a6a1985
VendorId:           0x10c4 (10C4)
ProductId:          0xea60 (EA60)
Revision:           1.0 (0100)
Port:               0
USB version/speed:  1/Full
Manufacturer:       Silicon Labs
Product:            CP2102 USB to UART Bridge Controller
SerialNumber:       0001
Address:            sysfs:/sys/devices/pci0000:00/0000:00:1d.1/usb7/7-1//device:/dev/vboxusb/007/007
Current State:      Captured

So, we have serial number 0001. We can now set up a VirtualBox USB filter to cause that particular device to be connected directly to the Windows 7 guest when the VM is running and the device is plugged in. Be sure to note down the vendor and product ids exactly as they are displayed, not the upper-cased versions in brackets after them!

Create a VirtualBox USB Filter

The Windows VM needs to be closed, for best results, and the USB-serial device should be plugged in. (If you prefer the command line, there is a VBoxManage alternative after the list below.)

  • Run the VirtualBox manager and click on the Windows VM on the VM list on the left.
  • Click settings to open the settings dialogue for that particular VM.
  • Scroll down and select the USB settings on the left.
  • On the right, make sure that USB 2.0 (EHCI) Controller is selected. I’ve found that selecting the USB 3.0 controller fails with my laptop and the Windows guest complains that it has no USB drivers. I don’t have USB 3 on my laptop, so 2 it is for me!
  • Click the second button down on the far right; it’s a USB plug with a ‘+’ on it. This adds a new filter.
  • Select the USB-serial device from the list that appears.
  • Right-click it and select edit. Make sure that the product and vendor ids, and the serial number, match. The other details can either be left as is, or ignored. Be aware that anything in any of these fields must match the device’s information exactly. OK your way back out of this dialogue. If no devices appear in the list, your user is not in the vboxusers group, or the membership is not yet active.
  • Make sure that the filter is enabled by ticking the checkbox to the left of the name.
  • Start the VM.
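Alternatively, the same filter can be created from the command line with VBoxManage – a sketch, assuming your VM is actually named "Windows 7"; substitute your own VM name and the ids reported for your device:

# Create USB filter 0 on the VM named "Windows 7", using the vendor id,
# product id and serial number reported by "VBoxManage list usbhost".
VBoxManage usbfilter add 0 --target "Windows 7" --name "CP2102 UART" \
  --vendorid 10c4 --productid ea60 --serialnumber 0001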

If the device is unplugged when the VM is subsequently started, it won’t matter. With the filter in place, plugging the device into a USB slot will cause it to be assigned to the guest and not to the host.

Install Arduino

In the Windows VM, open Firefox – other browsers are available. Internet Explorer is not a browser 😉 – and navigate to https://www.arduino.cc/en/Main/Software and download the software, 1.6.5 was the latest version at the time of writing. Make a donation, if you wish. When the software has downloaded, install it. It will attempt to install a few USB drivers and such like, make sure that you allow it to do so. Windows is so naggy!

You may have to reboot Windows to make the new drivers stick. Sigh.

Start up the Arduino IDE and configure it for your Arduino board (Tools->Board), port (Tools->Port) and programmer (Tools->Programmer). Make a note of the port. Open the ubiquitous blink sketch in the usual manner, and upload it. If it uploaded then well done, you are now able to program Arduinos from within a VM.

In my case, it failed as it couldn’t open port COM1, it said that the port could not be found. Running device manager (start, search, type device, select device manager from the list) and scrolling down to COM and LPT ports showed that only LPT1 was present.

Install a Proper Driver

Go directly to http://www.silabs.com/products/mcu/Pages/USBtoUARTBridgeVCPDrivers.aspx where you can download a driver to make CP210x devices work on just about every version of Windows. Select the appropriate version for your guest operating system. You will download a zip file.

Find the zip file, and extract it, then change into the folder that was created (CP210x_VCP_Windows\CP210x_VCP_Windows in my case) and run either the 32bit (x86) or 64 bit installer depending on your guest. Mine was 64 bit, so I executed CP210xVCPInstaller_x64.exe.

Guess what? Another reboot is required. Windows sucks!

After the reboot, open the blink sketch again and go to tools->port, your com port should now appear in the list, mine was deemed to be COM3, so I chose that.

Click upload, and be amazed as your Arduino starts working!

ATtiny85 Devices

Need to program these? Or the ATtiny25? Or ATtiny45? See how to set up Arduino IDE 1.6.x at http://highlowtech.org/?p=1695 where all is explained. It just works.

If anyone is interested, I’m connecting my USB-serial device to an Arduino Uno lookalike (actually, it’s a Shrimp, see http://shrimping.it/blog/, built on breadboard-style strip board) with an ATmega328 running the ArduinoISP sketch. My programmer in the IDE is “Arduino as ISP” and not “ArduinoISP” – confusing or what – and I’ve installed the ATtiny stuff as per the link above. It all just works.

Oh, and one other thing: the ATtiny25/45/85 has PWM on pin 4 as well as pins 0 and 1. The diagram on the above link isn’t quite correct, but I’ve informed them of this.

Have fun.

Linux Mint (Cinnamon) Not Showing Programs on Task Bar.


For some strange reason, my Linux Mint Cinnamon desktop suddenly stopped showing the running program buttons on the task bar. I could only switch between running programs with ALT+TAB.

It’s easy when you know how, but for those who don’t, this is what you do:

  • Right click on the blank task bar.
  • Select the “Add to Panel…” option. (Or similar depending on your version of Mint.)
  • Type “Window” into the search box.
  • Select “Window List” from the subsequent list.
  • Click “Add”.
  • Job done!

A Couple of Useful Linux & HP-UX Tricks

Recently at work, I was on an HP server and needed to grep -B3 a large log file to find the three lines prior to a number of Oracle error messages I was searching for, in order to fix things. It turned out that HP-UX doesn’t have the -B (or the -A) options. Bummer. A bit of awk fixed the former, but the latter I leave as an exercise for the reader as they say!

awk '/^ORA-/ {for (i = 1; i <= x;) print a[i++]; print; print "---"} {for (i = 1; i < x; i++) a[i] = a[i + 1]; a[x] = $0;}'  x=3 file_name

In my case, setting x=3 allowed me to display the three lines prior to the error line which itself began with ORA-.

In use, do this:

  • Change the search term between '/' in the above, to look for your text.
  • Set x to however many lines before the found text you wish to display.
  • Pass the file name to be searched as the last parameter of the command.

That's it. Simple, as they say! You can easily build the above into a shell script and simply pass it the three parameters for x, the search text and the file name. Saves getting all those punctuation characters in the right place!
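
For example, the following sketch would do it – the script name and parameter order here are my own invention, so adjust to taste:

#!/bin/ksh
# grepB.sh - print the x lines before each line matching a pattern.
# Usage: ./grepB.sh lines_before search_pattern file_name
awk '$0 ~ s {for (i = 1; i <= x;) print a[i++]; print; print "---"}
     {for (i = 1; i < x; i++) a[i] = a[i + 1]; a[x] = $0}' \
    x="${1}" s="${2}" "${3}"

Running ./grepB.sh 3 "^ORA-" my_alert.log then gives the same output as the one-liner above.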

Dos2Unix for Servers That Don't Have it Installed

Dos2Unix is very handy when you have to use a secure file transfer system - sftp, scp etc - to send text files from Windows to a Unix server. Because the servers are securely locked down, unlike your home PC, you cannot simply install packages as and when you like.

The problem with the secure protocols is that they send in binary, so Windows text files get transferred with the CR/LF end of line characters intact, which is irritating when you subsequently try to edit the file in vi (emacs is also not installed!).

To fix the problem, you could edit the file manually and remove the visible ^M characters from each line. You could, but why would you? Both sed and vi allow you to do it in one command. In vi:

:1,$s/^M//g

Of course, if you simply type in the two characters shown above, nothing will be replaced. However if you enter them as:

CTRL-v CTRL-m

Where the above means press the CTRL key and while holding it down, press the v (or m) key.

Now it works!
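
If you would rather fix the file from the shell instead of in vi, something along these lines should work – the sed version assumes GNU sed on Linux; older Unix seds may not understand \r, in which case tr is the portable option:

sed 's/\r$//' dos_file.txt > unix_file.txt    # GNU sed only.
tr -d '\r' < dos_file.txt > unix_file.txt     # Portable, HP-UX included.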

Tnsnames.ora Parser

Have you ever wanted to use a tool to parse the manually typed up “stuff” that lives in a tnsnames.ora file, to be absolutely certain that it is correct? Ever wanted a tool to check that all the opening and closing brackets match? I may just have the very thing for you.

Download the binary file Tnsnames.Parser.zip and unzip it. Source code is also available on GitHub.

When unzipped, you will see the following files:

  • README – this should be obvious!
  • tnsnames_checker.sh – Unix script to run the utility.
  • tnsnames_checker.cmd – Windows batch file to run the utility.
  • antlr-4.4-complete.jar – Parser support file.
  • tnsnames_checker.jar – Parser file.
  • tnsnames.test.ora – a valid tnsnames.ora to test the utility with.

The README file is your best friend!

All the utility does is scan the supplied input file, passed via standard input, and write any syntax or semantic problems out to standard error.
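
Since the problems go to standard error, you can capture them separately from anything written to the screen, if you wish:

./tnsnames_checker.sh < tnsnames.ora 2> tnsnames.errors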

Working Example

There are no errors in the tnsnames.test.ora file, so the output looks like the following:

./tnsnames_checker.sh < tnsnames.test.ora

Tnsnames Checker.
Using grammar defined in tnsnames.g4.
Parsing ....
Done.

Non-Working Example

After a bit of fiddling, there are now some errors in the tnsnames.test.ora file, so the output looks like the following:

./tnsnames_checker.sh < tnsnames.test.ora

Tnsnames Checker.
Using grammar defined in tnsnames.g4.
Parsing ....
line 5:12 missing ')' at '('
line 8:16 extraneous input 'address' expecting {'(', ')'}
Done.

You can figure out where and what went wrong from the messages produced.

Have fun.

Oracle RAC – Flashback Database

It was a simple enough request, flashback this particular database to a guaranteed restore point. What could possibly go wrong?

Database names etc have been changed to protect the innocent, and me!

$ sqlplus / as sysdba

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount
ORACLE instance started.
...
Database mounted.

SQL> flashback database
  2  to restore point GRP_2014_05_17_13_10;

However, this raised the following error:

ORA-38748: cannot flashback data file 1 - file is in use or recovery
ORA-01110: data file 1: '+DATA/XXXXXX/datafile/system.269.759338709'

A minor panic then ensued! It’s a production database, and I have limited downtime allocated!

In the back of my brain, I thought “maybe it’s a RAC database”?

SQL> show parameter cluster

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
cluster_database                     boolean     TRUE
cluster_database_instances           integer     2

SQL> show parameter instance_name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
instance_name                        string      XXXXXX_1

Ok, it is RAC, there are two instances, and I’m on instance XXXXXX_1. The database is obviously still in use by the other instance, which will be XXXXXX_2 given the naming conventions for this system. I need to bring it all down. The easiest way is to use srvctl:

SQL> exit

$ srvctl stop database -d XXXXXX -o immediate
$ sqlplus / as sysdba

SQL> startup mount
ORACLE instance started.
...
Database mounted.

SQL> flashback database to restore point GRP_2014_05_17_13_10;
Flashback complete.

SQL> alter database open resetlogs;
Database altered.

All I need to do now is start the other instance:

$ srvctl start instance -d XXXXXX -i XXXXXX_2

Job done, and still within the downtime!
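
If you want to be sure that both instances are back up, srvctl will confirm it – the node names below are, of course, made up:

$ srvctl status database -d XXXXXX
Instance XXXXXX_1 is running on node node1
Instance XXXXXX_2 is running on node node2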

And here’s a quick tip. Because the database is effectively right back at 17th May 2014, the guaranteed restore point and its numerous archived and flashback logs, which are taking up space in the FRA, can be dropped and recreated to save FRA space:

SQL> drop restore point GRP_2014_05_17_13_10;
Restore point dropped.

SQL> create restore point GRP_2014_05_17_13_10 guarantee flashback database;
Restore point created.

RMAN Error ORA-15028: Archived Log Not Dropped.

The following error popped up in an RMAN backup which was attempting to delete archived logs that had been backed up twice, at least, and were created more than two days ago:

RMAN-03009: failure of delete command on default channel at 09/12/2014 08:43:50
ORA-15028: ASM file '+FRA/MY_DBNAME/archivelog/2014_09_09/thread_1_seq_35804.3258.857840113' not dropped; currently being accessed

(Database names changed to protect the innocent, as usual.)

David Marcos has a blog entry from September 2010 on this very matter at http://davidalejomarcos.wordpress.com/2010/09/07/unable-to-delete-archive-log-and-fra-is-filling/ which suggests killing the various arc processes for the database in question, one by one. Oracle will restart them and the database will stay up.

Now I don’t know about you, but in my case, this was a production database and I have a certain trepidation about killing off background processes at random in the hope that it will cure a fault. That sort of approach may work for Windows faults and problems, but this is a real server, running under Unix. 😉

In the asmcmd shell at Oracle version 11.2 there is the useful lsof command that will return details of who, or what, has a file opened, but in my case, the version was only 11.1, which doesn’t have anything much in the way of useful commands!

I decided, in the absence of lsof, to search the alert log for the archived log’s filename to see if any of the arc processes recorded having had a problem with the file. They didn’t, but I did find the following:

LOGMINER: Begin mining logfile ... +FRA/MY_DBNAME/archivelog/2014_09_09/thread_1_seq_35804.3258.857840113
LOGMINER: Begin mining logfile ... +FRA/MY_DBNAME/archivelog/2014_09_09/thread_2_seq_37567.2138.857839917
LOGMINER: End mining logfile ... +FRA/MY_DBNAME/archivelog/2014_09_09/thread_2_seq_37567.2138.857839917
LOGMINER: Begin mining logfile ... +FRA/MY_DBNAME/archivelog/2014_09_09/thread_2_seq_37568.2810.857841217

From the above, it is plain to see that it is most unlikely that the arc processes would have been the culprits here. One of the other DBAs had started a log mining session for a few files, but had not ended the session for two of them.

Getting the DBA to run a quick DBMS_LOGMNR.REMOVE_LOGFILE(...) and/or a DBMS_LOGMNR.END_LOGMNR resolved the problem. Thankfully, the session running the log mining was still open. I am not sure what would happen if the session had already been closed; perhaps a full database restart would have been required – but as Wikipedia often says, “needs validation”.
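
For reference, what the DBA ran was along the following lines – a sketch only, using the file name from the alert log above, and it must be run in the session that is doing the mining:

SQL> exec dbms_logmnr.remove_logfile('+FRA/MY_DBNAME/archivelog/2014_09_09/thread_1_seq_35804.3258.857840113')
SQL> exec dbms_logmnr.end_logmnr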


Beware of the Silent Database Killer!

There is, out there in Oracle Land, a silent database killer. You never know when it will strike and it affects all databases right up to and including 12c. When it strikes, it does so silently, there is no evidence of its passing, until it is far too late.

What is This Killer?

The database killer is any code which runs in NOLOGGING or UNRECOVERABLE or, in some cases prior to 11g, DIRECT PATH loads.

The following are examples of these sorts of commands:

  • INSERT /*+ Append */ INTO … (11g is not affected by this.)
  • Any DML after ALTER …. UNRECOVERABLE;
  • Any DML after ALTER …. NOLOGGING;
  • CREATE … UNRECOVERABLE;
  • CREATE … NOLOGGING;

Alternatively, using SQL Loader with any of the following parameters in the control file:

  • UNRECOVERABLE
  • OPTIONS(DIRECT=TRUE)

Or, running SQL Loader with the following command line option:

  • sqlldr direct=true

Useful MOS Documents

The following documents will prove very useful if you are affected by this silent killer.

  • 290161.1 : The Gains and Pains of Nologging Operations.
  • 269274.1 : Check For Logging / Nologging On DB Object(s).
  • 751249.1 : Dbv-111 Ora-1219 Sys.X$Dbms_dbverify. (Or, in English, what to do when DBV on a standby data file throws an OCI error ORA-01219.)
  • 472231.1 : How to identify all the Corrupted Objects in the Database with RMAN.
  • 605234.1 : How to Copy ASM datafiles from Primary to Standby Database on ASM using RMAN.

Prevention is Better Than Cure

The following command must be executed on the primary and standby databases, if you have any.

alter database force logging;

Downtime is not required, and if any NOLOGGING work is in progress, the ALTER DATABASE will hang until that work completes. A message will be logged to the alert.log advising you of this.

The command executes quite happily on a physical standby database without the need to cancel recovery.

The result of the above command is that any attempt by an SQL operation to run NOLOGGING will be ignored; full logging will take place, resulting in the safety of your data and the continuing viability of the standby database(s).
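
A quick check that the setting has stuck:

SQL> select force_logging from v$database;

FOR
---
YES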

How Can I Tell if I’m Infected?

The following SQL statement will hopefully return no rows. However, if there are any rows returned, those are the data files that have been updated at some point in the past, with a NOLOGGING operation of some kind. They are sitting there, silently, waiting for an excuse to kill your database.

alter session set nls_date_format = 'dd/mm/yyyy hh24:mi:ss';

select file#, name, unrecoverable_change#, unrecoverable_time
from v$datafile
where unrecoverable_change# <> 0
order by file#;

The V$DATAFILE view only holds the most recent unrecoverable change details against each datafile. There may have been others in the past, but the one you see is the most recent.

No rows selected is good, but sometimes, you may see something like this:

FILE# NAME                                UNRECOVERABLE_CHANGE# UNRECOVERABLE_TIME
----- ----------------------------------- --------------------- -------------------
    6 +DATA/orcl/data/users.293.825959933        10917504423565 13/10/2014 15:00:34

As usual, database names etc have been changed to protect the innocent!

It makes no difference if the files are in ASM, as in this example, or on file systems, the problem is the same and needs to be attended to urgently.

If you find any data files with an unrecoverable change, as above, then the most obvious thing to do is immediately take a full backup of the database (or just the affected data files) because if you have to restore and recover the affected data files, that is the time when the corruptions are introduced into the primary database. The standby database, on the other hand, is already dead – it just doesn’t know it yet.
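
Assuming the affected file is number 6, as in the examples that follow, the minimal RMAN backup would be something like:

backup datafile 6;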

To check if your primary database is also dead, and to save you backing up a potentially dead primary, you can run the following in RMAN – there’s no need to connect with the catalog, if you use one:

backup validate check logical datafile 6;

If you see a non-zero number in the section entitled “Marked Corrupt” then it means that at some point since the unrecoverable change was applied to the primary database, it has been restored and recovered, and the data that was loaded is now missing. This database is mortally wounded unless you know what data needs to be reloaded; you will need to “uncorrupt” the affected datafiles by restoring or recreating them, and their contents.

If your database has a standby, then the standby is now not viable to be used as a primary. All the data that has been loaded into the primary database using NOLOGGING, or similar operations, has never been loaded into the standby database. If you detect any “corrupt” data files on the primary database, as above, with a date & time later than the standby database’s creation date & time, then you will need to rebuild or repair the standby database. You can check the CREATED column in V$DATABASE to determine the standby database’s creation details.

You will be able to run a data guard switch over, either manually, or with OEM or DGMGRL, without error. When the current standby comes up as the new primary database, there will be missing data. If the application and/or developers/vendor continues to run NOLOGGING data loads, then the amount of data loss simply increases. When you switch back to the old primary at some point, everything will be in a mess – and you will not know!

How big a mess?

  • There is the data originally loaded into the old primary, that is not present on the old standby – now the new primary – but which is present on the new standby.
  • There is also now, the data being loaded into the new primary, that is not being copied to the new standby (the old primary) – so both databases are missing some data.

If you switch back and forth a few times, the mess just keeps getting messier!

So What’s Going On?

Some vendors and/or developers, and possibly even the odd DBA, have read in the manuals that using NOLOGGING, UNRECOVERABLE or DIRECT PATH operations can “save time” or “improve performance” by not logging the data changes to the redo logs for the actual data. Changes to the data dictionary will still be logged and transferred to the standby databases.

These same people, however, appear to completely ignore the documentation where it says that “whenever you use a NOLOGGING operation, you must take a full database backup” immediately afterwards.

This is a problem. If you load a table with millions of rows in this manner, the table may extend by adding extents. These new extents will be recorded in the dictionary and will match the actual data usage of the table. The redo logs, however, will only record the changes to the dictionary. The standby database will update its dictionary with the details, but the table on the standby will not be updated with either the data or the new extents – it will not change its row or extent count at all.

The manual states that these operations should only be used on objects that are not required to be recovered. You might have a table that is simply used as the temporary source for a data load before it is transformed and loaded into the correct, final tables. You don’t care about recovering the data when it was temporary to begin with. However, the database is not a mind reader and doesn’t know, when you create a temporary table, use it, and then perhaps drop it, that the object was never required to be recovered. The standby datafiles will be flagged corrupt and the primary datafiles will still log an unrecoverable change.

As long as the primary database continues to run happily, the data thus loaded, can be manipulated at will in the normal manner.

If the primary database is ever restored and recovered using the archived redo logs created by the data loads, no errors or warnings will be displayed, but the previously loaded data will not be present afterwards. Attempting to access the data after a restore and recover will result in an error similar to the following:

ORA-01578: ORACLE data block corrupted (file # 6, block # 139)
ORA-01110: data file 6: '+DATA/orcl/data/users.293.825959933'
ORA-26040: Data block was loaded using the NOLOGGING option

A similarly nasty error will occur when the current standby database is opened read only, or switched over to become the new primary, but remember, the error only becomes apparent when the data are accessed in some way – this is really nasty!

How to Determine if You Are Affected

On the primary database, you can run the dbv utility against all the data files. Use the userid parameter if the data files live in ASM:

dbv file='+DATA/orcl/data/users.293.825959933' userid=sys/password

DBVERIFY: Release 11.2.0.4.0 - Production on Sat Oct 25 15:03:44 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

DBVERIFY - Verification starting : FILE = +DATA/orcl/data/users.293.825959933

DBV-00201: Block, DBA 25165963, marked corrupt for invalid redo application
DBV-00201: Block, DBA 25165964, marked corrupt for invalid redo application
DBV-00201: Block, DBA 25165965, marked corrupt for invalid redo application
DBV-00201: Block, DBA 25165966, marked corrupt for invalid redo application

DBVERIFY - Verification complete

Total Pages Examined         : 16
Total Pages Processed (Data) : 8
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 7
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 1
Total Pages Marked Corrupt   : 4
Total Pages Influx           : 0
Total Pages Encrypted        : 0
Highest block SCN            : 0 (0.0)

The above shows that 4 pages (aka blocks) are marked corrupt. This is what you will see on a primary that has been restored and recovered. You will not see any corruption if the primary has not been restored and recovered, the data are still present in that case.

Checking the standby is equally simple, as non-corrupt data files will happily pass dbv. However, any data file that is corrupted will cause dbv to abort with the following errors:

DBV-00111: OCI failure (4157) (ORA-00604: error occurred at recursive SQL level 1
ORA-01219: database not open: queries allowed on fixed tables/views only
ORA-06512: at "SYS.X$DBMS_DBVERIFY", line 22

If you see this, then your standby is invariably corrupt. However, to be absolutely certain – and you can do this on the primary as well; it’s quicker than dbv, by the way – use RMAN to do a pretend backup with a CHECK LOGICAL clause:

backup validate check logical datafile 6;

Note: From 11g onwards, the backup part of the command is not required:

validate check logical datafile 6;

A corrupt data file will throw up results similar to the following:

List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- ----------
6    OK     4              50           12801           7251912

  File Name: +DATA/orcl/data/users.293.825959933
  Block Type Blocks Failing Blocks Processed
  ---------- -------------- ----------------
  Data       0              8
  Index      0              0
  Other      0              12742

The Marked Corrupt column looks interesting, and shows that this data file, on the standby, is indeed corrupt. The data that should be present is not, and this is what silently makes our standby database completely and utterly useless.

We can check the extent and reasons for the corruption after an RMAN check by reading from V$DATABASE_BLOCK_CORRUPTION in SQL*Plus:

select * from v$database_block_corruption;

     FILE#     BLOCK#     BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
         6        139          4            7251895 NOLOGGING

This shows that there are 4 blocks, beginning at block 139, in data file 6 which have had NOLOGGING operations applied. If there are numerous corruptions, the following might be a better advisory query:

select file#, corruption_type, count(*)
from v$database_block_corruption
group by file#, corruption_type;

     FILE# CORRUPTIO  COUNT(*)
---------- --------- ---------
         6 NOLOGGING         1

If you need to find out what objects are corrupted, the following might be useful; you will need to plug in the starting block numbers from the query above.

select owner, segment_type, segment_name
from dba_extents
where file_id = 6
and 139 between block_id and block_id+blocks;

OWNER                SEGMENT_TYPE         SEGMENT_NAME
-------------------- -------------------- --------------------
NORMAN               TABLE                TEST

Rebuilding The Standby Database

If you have detected corruptions then your standby databases are useless and need rebuilding. Oracle advise that only the affected data files need to be rebuilt, which is great if the database is huge and only a subset of the files are corrupt. This is documented in MOS note 605234.1, but I have found that it doesn’t appear to work.

I have, however, worked out the problem and a suitable workaround until the bug gets fixed. I have raised an SR on this matter.

Basically what happens is that the first switch datafile to copy command works correctly. The second one which should switch to the latest datafile copy does not, and switches back to the previous data file, the corrupted one.

The work around is to carry out the first switch, list the copies of the affected datafile, and delete the copy that was the original corrupt file.

The process starts on the primary with an RMAN backup of a non-corrupted data file. There’s no point running this rebuild if the primary data files have been restored and recovered, the data will be missing there too, so check first as advised way back near the start of this article.

copy datafile '+DATA/orcl/data/users.293.825959933' to '/tmp/users.dbf';
exit

This file, /tmp/users.dbf should be copied over to a safe place on the standby server. If you use ftp then remember to transfer in binary. Scp or sftp will only do binary transfers.

Once the file exists on the standby, we can use RMAN to get the file copied into ASM and used by the standby:

catalog datafilecopy '/tmp/users.dbf';

sql "alter database recover managed standby database cancel";

switch datafile 6 to copy;                  # Now running on the file in /tmp.
backup as copy datafile 6 format '+DATA';   # Get a copy of the /tmp file into ASM.
switch datafile 6 to copy;                  # Should switch to the new file, but doesn't.

# Insert workaround here. See text.

sql "alter database recover managed standby database using current logfile disconnect";

As mentioned above, doing all this and running another validate check logical command, still shows corruptions on the standby, even though there are none on the primary. This is because the second switch datafile to copy command actually switched back to the old corrupted file instead of the new one. To get around this problem, follow these steps:

  • Switch to the /tmp copy as above.
  • Backup the /tmp file into +DATA as above.
  • Switch datafile n to copy; again. This is now using the corrupt file again.
  • List copy of datafile n; will show the /tmp file and the new backup in ASM.
  • Delete copy of datafile n tag "tag for the /tmp copy"; will delete the /tmp file leaving only the new file in ASM – the one we want.
  • Switch datafile n to copy; yet again! This is now correctly using the latest uncorrupted data file in ASM.
  • List copy of datafile n; will show the old corrupt file as the sole remaining copy.
  • Delete copy of datafile n; will get rid of the corrupt file.

So, a bit of mucking about to get the problem worked around, and you may need to do a lot more mucking about if you have other copies of the same datafile in RMAN, to get the correct new file switched into use and to get rid of the unwanted corrupt file. I didn’t have this problem as my own backups are done as backupsets, not copies.
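
For clarity, here is the full workaround from the list above as a single RMAN sketch for datafile 6 – the tag shown is hypothetical; take the real one from the list copy output:

switch datafile 6 to copy;                  # Back on the corrupt file again.
list copy of datafile 6;                    # Note the tag of the /tmp copy.
delete copy of datafile 6 tag 'TAG_OF_TMP_COPY';
switch datafile 6 to copy;                  # Now on the new copy in ASM.
list copy of datafile 6;                    # Only the old corrupt copy remains.
delete copy of datafile 6;                  # Get rid of it.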

Don’t let a silent killer destroy your databases, keep force logging turned on – it’s sometimes the only defence against half informed vendors, developers and the odd DBA! 😉

HP Printer Ink – WTF?

Running a business, I like to keep a small stock of spare printer ink cartridges, so I usually have a couple of spare colour and a couple of black ones, just in case. However, after a recent cartridge change, one black and one colour at the same time, the printer has suddenly stopped working. This was after about a month of perfect usage, not immediately after the change. The printer is an HP Photosmart 2610 All in one – and now, it’s an HP Photo-not-very-smart-at-all 2610 none in one!

2 November 2015 – Updated, see (far) below!

So, what went wrong? Who knows? On a day when I needed to print a few invoices for posting and do some scanning ready for a new contract, the machine came up with an error alert – “Remove and check color cartridge”. So I did; not much to check, really, but these were new around a month ago, so they should be ok, right? No joy at all. On replacing the cartridge, the same error appeared on the display panel.

Checking the internet, I could see that I was not alone. On the HP forums, an employee of HP stated quite categorically that rumours of there being an expiry date on HP genuine cartridges were simply a myth, and that while there was a date on each one (which was news to me) it was nothing more than a warranty date; the cartridges do not expire. Good news indeed, because my cartridges appeared to be dated March 2014, which is more than a year ago – I wonder where I bought these from because, as far as I’m aware, I bought them fairly recently, at least in the last 6 months. Still, they don’t expire, do they?

Anyway, long story short, I replaced the cartridge with another new one from the same 2 pack. Oh joy, same error. I tried the HP support web site for the UK and selected to “chat with an expert”, and after filling in all my details and my problem, the web site failed. Sigh! One refresh later and I had to fill it all in again. On hitting submit, it failed again with another database error. I gave up and turned to Twitter to let @HPSupport know. I almost immediately got a reply asking what was up and that the web site was fine.

After numerous back and forth tweets, Teri must have gone home for the evening and Eric took over. None of them mentioned anything about expiry dates even though I’d given them all the details. It has to be said that both were very helpful. In the end, Eric gave me a phone number to try (Monday to Friday 08:00 until 17:00 it seems – that’s no good to me!) I want help RIGHT NOW!

Eventually, I managed to find a couple of PDFs of the files I needed to send off, but I had to use CamScanner on my phone to scan my passport and bank accounts etc for the new contract. I still haven’t got my invoices printed out!

Come Saturday, and I purchased a brand new colour cartridge (334) from PC World, and fitted it. Hooray, no more error messages telling me to “remove and check color cartridge”. Now the error is “Insert black or Photo Print cartridge in right stall” – For the love of f*ck! I have a two pack of 339 black cartridges, which I’ve had for a wee while, not too long though, and I opened one of those. The date? August 2015. Remove all the protective bits and slip it in. Same. F*cking. Message!

So, I’m now rather annoyed at my printer, and at HP in general. It appears that the so called warranty date on each cartridge is actually some sort of expiry date, contrary to what was written on HP’s own forums, by an HP employee (admittedly, his disclaimer was that he was not speaking for HP). I now appear to have a stock of expensive genuine HP cartridges that are totally unusable, plus, the whole so called all in one, is sitting there like a pile of junk because none of the features of the machine will work until I get another new black cartridge fitted into the damned thing. I want to send a fax? No chance. Scan something? Ha ha ha!!

According to HP’s support site, somewhere, I can remove said faulty colour cartridge and print in mono – which had been working until this tale of woe began – but it appears not. The printer simply sits there begging me to “insert a color cartridge”. Sigh.

Given that I have, on occasion, needed to print photos out, and I have a Photo Print cartridge, I tried that too; no joy, the printer doesn’t want to know. Mind you, that’s definitely an old one, I checked the date and it’s September 2006, but what’s the point of having these removable and re-sealable cartridges if the damned things stop working for no bloody reason other than the fact that they have gone past the date stamped on the front?

So, I have in my collection a pile of brand spanking new cartridges, which were not cheap, are mostly unused and have cost me £55.99 for the two 339 black ones, £61.99 for the two 334 colour ones and £28.99 for the Photo Print colour one. Not including the £25.00 I paid in PC World for a new 334 Colour just to make sure that the printer was fine! £175.97 of my money that is now wasted.

HP, I am not happy with you. This is no way to treat your customers and if it turns out that my printer is fine, but there is an expiry date on the cartridges (ie, not a warranty date), then I’m looking for a refund and/or replacement for all of the above. Somehow I doubt that much will be forthcoming, but given that nothing is mentioned about this in any part of the packaging or warranty information – which is printed in such tiny lettering, it’s almost impossible to read – I don’t see that you have any reason to refuse.

Plus – because I don’t ask for much – how dare you build software into your devices that prevents me from using other parts of the so called all in one, when there is no printer ink present. At least, when the software thinks there is none; there is actually a pretty much full set of black and colour inks, thank you very much. Surely it doesn’t take a rocket scientist to write software that would prevent me from using the copier or the printer if I did truly have an out of ink problem, but would let me use the fax and/or scanner, which don’t actually need ink.

And preventing me from using my bought and paid for, genuine HP cartridges just because some arbitrary date has been and gone is probably against one or more of the laws of the EU. I’m thinking that I need an update to the firmware on this device to remove said restrictive practices. Failing that, my next printer will not be an HP one.

2nd November 2015 – Update

Oh well, today is Monday and I’ve once more returned to the fray. It seems that there have been numerous problems with any number of HP’s printers/All in ones etc, whereby the tension spring that holds the cartridge in place (I assume) has come loose as a result of a tiny little – about 3 by 6mm (or 1/8″ by 1/4″ for the folks in the US) – plastic tab breaking off.

I decided to have a look and found that while the colour cartridge was fine, the black cartridge’s wire was indeed hanging loose and the plastic tab was lying in the depths of the machine – where all the overflow ink goes. I’m not delving in there! There are new life forms evolving in the primordial gloop that’s in there!

The workaround is to glue up the spring and hope that it has not punched holes in the rather delicate mylar circuit board behind where the cartridge sits. I think mine is ok, but there is a mark near the bottom of the black cartridge’s mylar strip that isn’t present on the colour side. Here’s hoping it’s not serious.

These printers apparently, even while on sale and under warranty, do not have spare parts or any other way of replacing bits – they appear to be built down to a price, and are simply expected to be replaced when they develop a fault. Another WTF!

I would rather pay decent money for something that is substantially built, lasts and works – and can be repaired with spares etc – than buy a cheap printer that will most likely end up in a landfill site at some point. Unless, of course, I’ve gutted it for the stepper motors and silver steel guides etc!

I’m waiting for the Superglue to cure even as I type! Wish me luck.

2nd November 2015 – Update 2

Ok, Superglue, as advised on fixyourownprinter.com, doesn’t work. I didn’t think it would.

I’m on Araldite Rapid now. I should know in an hour or two if this has been successful! If not, the guts of this printer might end up as a small CNC machine of some kind!

2nd November 2015 – Update 3

Araldite Rapid has worked. After leaving it to cure for an hour, I have a printer that works again. It reports a failure to align the heads after replacing the cartridges, but it prints text and graphics fine, so far, so I shall not be worrying.

Conclusions

  • HP appears to have a design fault with the retaining springs – they break out of the plastic tabs and trash the mylar circuit connectors – I was lucky, my mylar was only slightly nicked, so I might have got away with it.
  • HP were taken to court in the USA for their cartridge expiry “feature” and, as far as I can see, they lost and had to remove it from the printer firmware. (Sadly, I can’t get back to the web site with the details!)
  • The dates on the cartridges might be a warranty date, but it might be that after 4 years from that printed date, they will expire anyway. Time will tell – that’s its job after all!
  • Having a fault with the black cartridge retaining spring, displays a message that the colour cartridge is the one at fault. Sigh!
  • Having a printer resources problem, or perceived problem, causes the other features of the all in one to be unusable. A minor software problem, who would have thought? Scanning doesn’t need ink, neither does sending a fax – at least write the software to allow those to be used.

So, the question has to be, are there any decent long lived and repairable printers out there in the world any more?

UTL_FILE Operation fails with ORA-29283

A process that called UTL_FILE was failing in the test system, but worked fine with exactly the same set up in production. Why? The error was ORA-29283: invalid file operation. How do we find out exactly why it was failing?

MY_DIRECTORY is a directory, owned by SYS with READ and WRITE privileges granted to a schema that uses it to create, write and read files in that location.

The oracle account on the server can create and read files in the directory location, touch and cat prove this.

Running a PL/SQL package, however, fails. The failing code was reduced to the following test sample:

declare
 v_fd utl_file.file_type;
begin
 v_fd:=utl_file.fopen('MY_DIRECTORY','norman.txt','w');
 utl_file.fclose(v_fd);
end;
/

Which blows up with the less than helpful message:

ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at line 4

Here’s a nice trick, stolen blatantly from Michael Schwalm at http://blog.dbi-services.com/troubleshooting-ora-29283-when-oracle-is-member-of-a-group-with-readwrite-privileges/ which shows how to actually see what the real underlying problem is for this exception.

SQL> -- Change define, we need to use an ampersand.
SQL> set define #

SQL> -- Get my current process ID into a variable.
SQL> column spid new_value unix_pid
SQL> select spid from v$process p 
  2  join v$session s on p.addr=s.paddr 
  3  and s.sid=sys_context('userenv','sid');

SPID
------------------------
121080

SQL> -- Trace open calls from my session. 
SQL> -- Without the &, the host call never returns!
SQL> -- We know that it is the utl_file.fopen call that is 
SQL> -- failing, so only trace open calls.
SQL> host strace -e trace=open -p #unix_pid & echo $! > tmp.pid
Process 121080 attached - interrupt to quit

SQL> -- Paste in the offending code...
SQL> declare
  2    v_fd utl_file.file_type;
  3  begin
  4    v_fd:=utl_file.fopen('MY_DIRECTORY','norman.txt','w');
  5    utl_file.fclose(v_fd);
  6 end;
  7 /

That throws up the following helpful message, followed closely by the expected Oracle exception message again (not shown):

open(" /logfiles/MYDB/norman.txt", O_WRONLY|O_CREAT|O_TRUNC, 0666) = -1 ENOENT (No such file or directory)

At this point, I need to press CTRL-C to detach the strace session.

SQL> ^C
Process 121080 detached

Looking at the above message, I can see (almost) straight away that the file path has a leading space. This implies that whoever set up the original directory, created it with a minor typo that is hard to detect when looking at DBA_DIRECTORIES.
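
Leading spaces are much easier to spot if you wrap the path in delimiters when querying – something like this:

SQL> select directory_name, '['||directory_path||']' as path
  2  from dba_directories
  3  where directory_name = 'MY_DIRECTORY';

DIRECTORY_NAME   PATH
---------------- --------------------
MY_DIRECTORY     [ /logfiles/MYDB]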

The fix was simple:

SQL> create or replace directory MY_DIRECTORY as '/logfiles/MYDB';
SQL> grant  read, write on directory MY_DIRECTORY to [whoever needs it];

And now, paste in the offending code again, and it “just works”:

SQL> declare
  2    v_fd utl_file.file_type;
  3  begin
  4    v_fd:=utl_file.fopen('MY_DIRECTORY','norman.txt','w');
  5    utl_file.fclose(v_fd);
  6 end;
  7 /

PL/SQL procedure successfully completed.

Archivelog Deletion Policy Changes Don’t Always Take Immediate Effect.

The standby database had the RMAN archivelog deletion policy set to ‘NONE’ instead of being ‘APPLIED ON ALL STANDBY’ and the FRA filled up to within an inch of its life, or 99% of its allocated quota! Not a major problem as this database was not in production, but still, an alert is an alert and has to be dealt with. However, things did not go quite as expected.

First things first, check the archivelog deletion policy on the standby database:

$ rman target / catalog user/password@catalog

RMAN> show archivelog deletion policy;
RMAN configuration parameters for database with db_unique_name xxxxxxxx are:
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;

RMAN> configure archivelog deletion policy to applied on all standby;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO NONE;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters are successfully stored

RMAN> exit 

As the FRA was currently at 99% usage, I expected Oracle to start clearing out archived logs that had been applied on the standby, in other words, the vast majority of them. Strangely, this didn’t happen. Never mind, sometimes you have to encourage Oracle by transporting a new archived log from the primary, which usually kicks off the tidy up process. Let’s do that, on the primary database.

$ sqlplus / as sysdba

SQL> alter system archive log current;
System altered.

SQL> exit

Back on the standby database, tailing the alert log shows that the new archived log had arrived and been applied, but still no deletions. Hmmm, maybe a little more encouragement is required.

$ sqlplus / as sysdba

SQL> show parameter recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      +FRA
db_recovery_file_dest_size           big integer 40G

So, we have a quota of 40gb in the FRA for this standby, it’s not yet in production, so that’s a reasonable quota. How much is actually used? We know it’s a lot – 99% – from the alert that started all this off, but let’s check anyway.

SQL> set lines 300 pages 300 numw 15 trimspool on

SQL> col name format a10
SQL> col space_limit format 999,999,999,999
SQL> col space_used format 999,999,999,999
SQL> col space_reclaimable format 999,999,999,999
SQL> col number_of_files format 9,999,999

SQL> select * from v$recovery_file_dest;

NAME            SPACE_LIMIT       SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES          CON_ID
---------- ---------------- ---------------- ----------------- --------------- ---------------
+FRA         42,949,672,960   42,652,925,952    42,633,003,008           1,313               0

Pretty much all of it, as expected. What makes up the usage? Is it all archived logs, or could there be restore points and/or flashback logs as well?

SQL> col percent_space_used format 990.00
SQL> col percent_space_reclaimable format 990.00

SQL> select * from v$flash_recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES          CON_ID
----------------------- ------------------ ------------------------- --------------- ---------------
CONTROL FILE                          0.00                      0.00               0               0
REDO LOG                              0.00                      0.00               0               0
ARCHIVED LOG                         99.31                     99.26           1,313               0
BACKUP PIECE                          0.00                      0.00               0               0
IMAGE COPY                            0.00                      0.00               0               0
FLASHBACK LOG                         0.00                      0.00               0               0
FOREIGN ARCHIVED LOG                  0.00                      0.00               0               0
AUXILIARY DATAFILE COPY               0.00                      0.00               0               0

In this case, it’s definitely archived logs and the vast majority can be reclaimed. Time to get rid! We know that gentle persuasion didn’t have any effect, so let’s increase the pressure a little:

SQL> alter system set db_recovery_file_dest_size = 35g scope=memory;
System altered.

And tailing the alert log, in another session, shows that things are finally happening.

Fri Dec 11 14:03:13 2015
ALTER SYSTEM SET db_recovery_file_dest_size=35G SCOPE=MEMORY;
Fri Dec 11 14:03:13 2015
Deleted Oracle managed file +FRA/xxxxxxxx/ARCHIVELOG/2015_09_29/thread_1_seq_70.835.891673917
Deleted Oracle managed file +FRA/xxxxxxxx/ARCHIVELOG/2015_09_29/thread_1_seq_71.750.891675635
Deleted Oracle managed file +FRA/xxxxxxxx/ARCHIVELOG/2015_09_29/thread_1_seq_72.751.891675635
Deleted Oracle managed file +FRA/xxxxxxxx/ARCHIVELOG/2015_09_29/thread_1_seq_73.752.891675639
Deleted Oracle managed file +FRA/xxxxxxxx/ARCHIVELOG/2015_09_29/thread_1_seq_74.754.891680149
...

A few more gentle reductions, to avoid completely hanging things up, and we were eventually right down at 2gb FRA quota. Finally, I reset the quota back to the original 40gb and checked again:

SQL> alter system set db_recovery_file_dest_size = 40g scope=memory;
System altered.

SQL> select * from v$recovery_file_dest;

NAME            SPACE_LIMIT       SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES          CON_ID
---------- ---------------- ---------------- ----------------- --------------- ---------------
+FRA         42,949,672,960    2,112,880,640     2,092,957,696              61               0


SQL> select * from v$flash_recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES          CON_ID
----------------------- ------------------ ------------------------- --------------- ---------------
CONTROL FILE                          0.00                      0.00               0               0
REDO LOG                              0.00                      0.00               0               0
ARCHIVED LOG                          4.92                      4.87              61               0
BACKUP PIECE                          0.00                      0.00               0               0
IMAGE COPY                            0.00                      0.00               0               0
FLASHBACK LOG                         0.00                      0.00               0               0
FOREIGN ARCHIVED LOG                  0.00                      0.00               0               0
AUXILIARY DATAFILE COPY               0.00                      0.00               0               0

Job done. With a little encouragement!

Raspberry Pi, PiZero, Raspbian Jessie, Networking and WiFi Setup

With a title like that, I should get some hits! 😉

Since Jessie came along, networking has changed slightly, and there are numerous people having problems getting Jessie to connect to the internet, or just to WiFi. This might help. Oh, by the way, this is only a problem if the package named “raspberrypi-net-mods” has been installed – or so it seems.

If the above package has been installed, at least it backs up your old config files in /etc/network/*-old. So you can revert, if necessary. However, we want to use the latest and greatest, so read on.

There are a couple of files that need to be updated:

File /etc/network/interfaces

This file should resemble the following:

auto lo
iface lo inet loopback

auto eth0
allow-hotplug eth0
iface eth0 inet manual

auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

In case you are wondering, the “auto” lines mean that the Pi will attempt to bring up these networks at boot time. If you don’t want this, remove the lines and use sudo ifup eth0 or sudo ifup wlan0 to bring them up manually.

It’s probably best not to touch the “lo” device though, just in case! 😉

In the old days, the various iface lines would be similar to one of the following:

iface wlan0 inet static
iface wlan0 inet dhcp

But now, we leave them as manual.

There would, depending on the settings required, also be a few more lines to set the netmask, static IP address, gateway and such like. These live elsewhere now, so keep reading!

So far, so good. For a WiFi network, we need to update the wpa_supplicant.conf file.

File /etc/wpa_supplicant/wpa_supplicant.conf

Edit the file /etc/wpa_supplicant/wpa_supplicant.conf and make it look like the following example – similar, that is, not identical.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
   ssid="YourNetworkSSID"
   psk="YourVerySecretPassword"
#   scan_ssid=1
#   priority=1
}

Ok, points to be very careful of:

  • There are no spaces between names, equal signs and parameters etc. If you have any spaces, networking will not start. Ask me how I know!
  • Lines with a ‘#’ as the first character are comments. They are ignored.
  • You need one network={...} entry for each WiFi network that you might use. Home, work, college, whatever. Each will have a different name in the ssid entry.
  • The scan_ssid and priority entries are optional.
  • Priority determines the order in which your Pi will connect if more than one of the networks is available. It’s best to give each one a different priority in that case! If more than one network has the same priority, then the signal strength and/or security policy may be used to choose the best one to connect to.
  • Scan_ssid determines how the network will be scanned for. From the docs, and if you follow this, you are better than I am!

    SSID scan technique; 0 (default) or 1. Technique 0 scans for the SSID using a broadcast Probe Request frame while 1 uses a directed Probe Request frame. Access points that cloak themselves by not broadcasting their SSID require technique 1, but beware that this scheme can cause scanning to take longer to complete.

I have two WiFi networks that the Pi can connect to, so I have two different entries in my configuration file.

So, replace "YourNetworkSSID" above, with the name of your WiFi network. If the network doesn’t broadcast the SSID, then you need to know exactly what it is.

Replace "YourVerySecretPassword" with your own network’s password.
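
Incidentally, if you would rather not keep the password in plain text, the wpa_passphrase utility will generate a network entry with a hashed psk which you can paste in, or append, instead:

wpa_passphrase "YourNetworkSSID" "YourVerySecretPassword" | sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf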

Save the changes. My own file looks like this:

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
   ssid="HomeWIFI"
   psk="YoullNeverGuessThisPassword"
}

network={
   ssid="OfficeWIFI"
   psk="YoullNeverGuessThisPasswordEither"
}

I have a home and office Wifi in the same place. I work from home and can use either.

In the past, you would set up your static IP addresses, gateways, netmasks etc in the /etc/network/interfaces file. No longer! It is now held in the /etc/dhcpcd.conf file. (Which I find very difficult to (a) remember and (b) type!)

File /etc/dhcpcd.conf

Only do the following if you wish to assign a static IP address to your Pi when it’s on wired (eth0) and/or WiFi (wlan0) networks. I have mine set to have a static IP on both, so my file looks similar (ok, identical) to the example below.

Edit the file /etc/dhcpcd.conf and search for the interface eth0 or interface wlan0 lines. They may not be present. If you don’t find the entries, scroll to the end of the file.

Add the following configuration lines:

# Static IP configuration for eth0.
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=8.8.4.4 8.8.8.8

# Static IP configuration for wlan0.
interface wlan0
static ip_address=192.168.1.51/24
static routers=192.168.1.1
static domain_name_servers=8.8.4.4 8.8.8.8

Some more points to be very careful of:

  • As mentioned above, you only need to do this if you wish to allocate a static IP address to your Pi when it’s on wired (eth0) and/or WiFi (wlan0) networks. If you don’t care what IP it has, simply ignore this section.
  • Again, there are no spaces between names, equal signs and parameters etc. If you have any spaces, networking will not start. Ask me how I know!
  • You need one interface ... entry for each iface ... that you have defined in /etc/network/interfaces – don’t set one up for the “lo” device though! The names must match too.
  • Yes, you do need to repeat the information for routers etc in each entry. They don’t get shared, which is a shame. It would be nice to have a global section where I could define these once, then only define the changeable stuff in each interface entry. Still, c’est la vie, as they say in Wales.

Ok, the static ip_address line defines that I’m using an IP address of 192.168.1.50 when on eth0, the wired network, and 192.168.1.51 when on WiFi. This allows me to use both, at the same time, if I wish, without an IP clash.

The /24 means that the first 24 bits of the IP address are my netmask, or 255.255.255.0. This identifies my network as 192.168.1.xxx and my devices as 50 and 51, in this case.

My router is on address 192.168.1.1, so that’s what goes into the routers entry.

I use Google’s name servers for my DNS name resolution, they are almost always available, but you can add others if you wish. Your ISP might have some that you can use. Separate each one with a space.

By the way, don’t use a name here, always use an IP address. You can’t resolve the names you use for the DNS servers if you don’t have a DNS server to resolve them! (No, I didn’t do that one myself, thanks for asking!)

Save the file.

That should do it. A quick reboot is probably the best way to make sure everything is set up correctly, so insert a slight pause here while we reboot.

sudo reboot

When the system comes back up, run the following command to see what appears:

hostname -I  ## That's a capital 'eye' not a 'one'.

You should see the two static IP addresses you configured, if you did so, and if you have both eth0 and wlan0 up and running, like I do. If you are only connected to WiFi, for example, you should only see the one IP address for that.

If you see an extra IP address, then you probably still have some dhcp or static entries in your /etc/network/interfaces file, in the old style format, that need to be removed. Make sure that your file looks like mine above to get rid of the ghost IP address.

HTH

Cheers.

Turn Off Task Bar Thumbnail Previews on Linux Mint Cinnamon 17.3

I hate it when I click on a task bar icon to open up a snoozing application, and a couple of seconds later, there’s a preview of the application showing at the bottom of the screen. I don’t need it as the icon tells me what it is, thanks! Getting rid is easy:

  • Click the Menu button. (Mint’s equivalent of the ‘start’ button in some other operating systems!)
  • Choose ‘Preferences’.
  • Choose ‘Applets’.
  • Scroll down, or search, for ‘Window List’.
  • Select it, click the ‘configure’ button.
  • Deselect ‘Show thumbnails on hover’.

You would, as I did, think that the setting might be found within the System Tray applet’s configuration, but you will find, as I did, that it doesn’t actually have any!

TraceMiner – An Oracle Utility to Mine 10046 Trace Files

Have you ever needed to trawl through an Oracle trace file to extract the SQL statements executed, and found a whole load of bind variables have been used, so you need to find the BINDS section, extract the values, and manually paste them into the parsed SQL statement?

No? This utility isn’t for you then.

Trace Miner

TraceMiner, as it is known, is available for download as source code, from my GitHub repository. Click on the Download Zip button to get hold of it, then simply unzip it, cd to the created folder, edit the config.h file to suit your system, and then execute make to build the TraceMiner utility.

The README files (either markdown or HTML) have all the details.

So, given a trace file – which must have binds (10046, level 4 or 12 etc), the output will look something like this:

------------------------------------------------------------------------------------------
Trace Line:         Cursor ID: SQL Text with binds replaced                           
------------------------------------------------------------------------------------------
        62:  #140136345356328: SELECT "LW_MEASURED_ID","SPECIES_RUN_ID","LENGTH","WEIGHT","SCALE_PACKET","CHARACTERISTIC_ID","AGE_BAND_ID","PRE_FIRST_SPAWN_ID","POST_FIRST_SPAWN_ID","TOTAL_SEA_AGE_ID","NUMBER_MARKS","NALL_AGE","INSERT_USER_NAME","INSERT_DATE","UPDATE_USER_NAME","UPDATE_DATE","TRAP_NUMBER","LW_MEASURED_COMMENT","ROWID" FROM "U_NFP"."NFP_LW_MEASURED" WHERE "ROWID"="AAAaPmAAGAAAW71AAY"
       146:                  : COMMIT

        96:  #140136346336472: INSERT INTO "U_NFP"."NFP_LW_MEASURED" ("SPECIES_RUN_ID","LENGTH","WEIGHT","SCALE_PACKET","CHARACTERISTIC_ID","AGE_BAND_ID","PRE_FIRST_SPAWN_ID","POST_FIRST_SPAWN_ID","TOTAL_SEA_AGE_ID","NUMBER_MARKS","NALL_AGE","TRAP_NUMBER","LW_MEASURED_COMMENT") VALUES (664499,137,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)
       170:                  : COMMIT

        26:  #140136346198152: select 00016EF5.0019.0006 from dual
       201:                  : COMMIT
...

So far, it has handled all the trace files I’ve thrown at it, but if yours breaks or doesn’t produce the correct results, give me a shout. My email is in the README.

There’s an option to run TraceMiner in --verbose (or just -v) mode. Don’t! You have been warned. However, if you do (and you really shouldn’t!) make sure to redirect stderr to a file which will get very very very big.


Does Your Raspberry Pi 3 Lose WiFi Connections After a While?

If you find that your Raspberry Pi 3, the new one (at the time of writing anyway!) with built in WiFi and Bluetooth, loses the WiFi connection after a period of inactivity, then this thread on the Raspberry Pi Forums, which will open in a new tab, might be of interest. Have a read. If you want to miss out on the preliminaries of the thread, start reading here instead.

Basically, all you need to do is:

sudo iw dev wlan0 set power_save off

I’m not affected yet, but I’m making a note here, just in case!

Printing, Completing & Scanning PDF Documents

As a contractor I often have to fill in and sign various contract agreements. These are usually tens of pages in length, and while I only have to sign one page, I still must scan in and send back each and every page, even the untouched ones. It would help if a PDF with form filling abilities was supplied, but hey, that’s only rarely the case. This is how I do it. (On Linux – Windows users’ mileage may vary!)

First of all, install pdftk. You only have to do this once.

sudo apt-get update
sudo apt-get install pdftk

Your commands may vary – I’m using Linux Mint 17.3 at the time of writing – so use whatever your package manager requires.

Next, create a couple of useful utility scripts to split a large PDF file into separate pages, and another to join the pages back up into one file again.

Split Out the Pages

The script to split an input PDF file into separate pages is called splitPDF.sh, and is as follows. Please be aware that there is very limited error checking – I’m supposed to know what I’m doing! (Famous last words!)

#!/bin/bash
#
# splitPDF.sh - split a PDF into one file per page, using pdftk.
ORIGINALPDF="${1}"
PAGESPDF="${ORIGINALPDF%%pdf}page_%04d.pdf"
echo "Processing \"${ORIGINALPDF}\" to \"${PAGESPDF}\""

pdftk "${ORIGINALPDF}" \
      burst \
      output "${PAGESPDF}"

echo " "
echo "Pages created."
echo "Copy/Overwrite replacement pages with same file name, then,"
echo "./joinPDF.sh \"${ORIGINALPDF}\"."
echo " "

The script accepts the full PDF file name on the command line – wrapped in double quotes if there are spaces or other special characters – and splits it into separate pages. If, for example, the file is called Contract.pdf, running the script as ./splitPDF.sh Contract.pdf will produce a number of files named Contract.page_0001.pdf, Contract.page_0002.pdf and so on, until there is a separate file for each page of the original document.

All Join Together

Joining the separate pages back together again is handled by the following script, named joinPDF.sh:

#!/bin/bash
#
# joinPDF.sh - concatenate the page files back into one PDF, using pdftk.
ORIGINALPDF="${1}"
UNCOMPRESSEDPDF="${ORIGINALPDF%%pdf}uncompressed.pdf"
COMPLETEDPDF="${ORIGINALPDF%%pdf}completed.pdf"
PAGESPDF="${ORIGINALPDF%%pdf}"

echo "Processing \"${PAGESPDF}page_*.pdf\" to \"${UNCOMPRESSEDPDF}\""

pdftk "${PAGESPDF}"page_*.pdf \
      output "${UNCOMPRESSEDPDF}"

echo "\"${UNCOMPRESSEDPDF}\" created."
echo "Compressing... "

pdftk "${UNCOMPRESSEDPDF}" \
      output "${COMPLETEDPDF}" \
      compress

echo "\"${COMPLETEDPDF}\" created."

Again, this is executed with the original file name used when splitting the pages. For example, with our example above, we would run the command ./joinPDF.sh Contract.pdf. This will take all the pages named Contract.page_000n.pdf, in order, and create a temporary uncompressed file named Contract.uncompressed.pdf. That file will then be compressed into the final output file, named Contract.completed.pdf.

The scripts will not overwrite the original files, just in case, and all temporary files are left around afterwards as well. You never know.

In Use

Using the utilities above is simple. The workflow, if it can be called such, is as follows:

  • Save a copy of the master file somewhere temporary.
  • Split out the individual pages.
  • Print, sign, scan the pages that must be signed, etc.
  • Copy the newly scanned pages over the originals.
  • Join everything back together.

So, here’s a worked example from my latest contract which required 13 pages to be returned with only page 12 requiring a signature.

norman@hubble  $ ### List current files:
norman@hubble  $ ls
Contract.pdf  joinPDF.sh  splitPDF.sh
norman@hubble  $ ### Split Contract.pdf into pages:
norman@hubble  $ ./splitPDF.sh Contract.pdf 
Processing "Contract.pdf" to "Contract.page_%04d.pdf"
 
Pages created.
Copy/Overwrite replacement pages with same file name, then,
./joinPDF.sh "Contract.pdf".
norman@hubble  $ ### Check result:
norman@hubble  $ ls
Contract.page_0001.pdf  Contract.page_0007.pdf  Contract.page_0013.pdf
Contract.page_0002.pdf  Contract.page_0008.pdf  Contract.pdf
Contract.page_0003.pdf  Contract.page_0009.pdf  doc_data.txt
Contract.page_0004.pdf  Contract.page_0010.pdf  joinPDF.sh
Contract.page_0005.pdf  Contract.page_0011.pdf  splitPDF.sh
Contract.page_0006.pdf  Contract.page_0012.pdf
norman@hubble  $ ### Overwrite original page 12 with signed & scanned page 12:
norman@hubble  $ cp /tmp/Contract_page_12.pdf ./Contract.page_0012.pdf
norman@hubble  $ ### Merge originals plus signed page:
norman@hubble  $ ./joinPDF.sh Contract.pdf
Processing "Contract.page_*.pdf" to "Contract.uncompressed.pdf"
"Contract.uncompressed.pdf" created.
Compressing... 
"Contract.completed.pdf" created.
norman@hubble  $ ### Check result:
norman@hubble  $ ls
Contract.completed.pdf  Contract.page_0007.pdf  Contract.pdf
Contract.page_0001.pdf  Contract.page_0008.pdf  Contract.uncompressed.pdf
Contract.page_0002.pdf  Contract.page_0009.pdf  doc_data.txt
Contract.page_0003.pdf  Contract.page_0010.pdf  joinPDF.sh
Contract.page_0004.pdf  Contract.page_0011.pdf  splitPDF.sh
Contract.page_0005.pdf  Contract.page_0012.pdf
Contract.page_0006.pdf  Contract.page_0013.pdf

And finally, I can email the file Contract.completed.pdf back to the agent and I’m ready to start work. :-)

Tnsnames.ora, IFILE and Network Drives on Windows

I’ve recently begun a new contract migrating a Solaris 9i database to Oracle 11gR2 on Windows, in the Azure cloud. I hate Windows with a vengeance and this hasn’t made me change my opinion!

One of the planned improvements is to have everyone using a standard, central tnsnames.ora file for alias resolution. A good plan, and the company has incorporated my own tnsnames checker utility to ensure that any edits are valid and don’t break anything.

I found that the tnsnames.ora in my local Oracle Client install was not working. Here’s what I had to do to fix it.

In my local tnsnames.ora, I had something like the following:

IFILE="\\servername\share_name\central_tnsnames\tnsnames.ora"

(Server names etc have been obfuscated to protect the innocent!)

However, using the above caused tnsping commands, or connection attempts to time out or simply fail:

tnsping barney

TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 19-MAY-2
016 12:16:42

...

TNS-03505: Failed to resolve name

If the standard tnsnames.ora file was copied locally, and IFILE‘d, then it all just worked as expected.

The problem is simple: Oracle isn’t fond of IFILEing files from networked drives. So, to get around this, I needed to map a network drive instead, and use the drive specifier in my IFILE.

First, map a persistent network drive as my (new) Y: drive. The /PERSISTENT:YES flag means it is reconnected at every logon until further notice. Note that this mapping uses my current credentials to make the connection.

net use Y: \\servername\share_name /PERSISTENT:YES
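
If the share needs different credentials, net use will take those on the command line too – a sketch; the domain and account names here are made up:

net use Y: \\servername\share_name /USER:MYDOMAIN\ndunbar /PERSISTENT:YES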

And in my tnsnames.ora, I now have this:

IFILE="Y:\central_tnsnames\tnsnames.ora"

And now, it all just works!

C:\Users\ndunbar\Downloads>tnsping barney

TNS Ping Utility for 64-bit Windows: Version 11.2.0.1.0 - Production on 19-MAY-2
016 12:21:23

...

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION= (ADDRESS= (PROTOCOL=TCP) (HOST=bedrock) (PORT=1521)) (CONNECT_DATA= (SERVER=dedicated) (SERVICE_NAME=barney)))
OK (160 msec)

HTH

RMAN Connection Troubles, RMAN-03010 & RMAN-10038

For no apparent reason, after many weeks of use, RMAN suddenly cannot connect:

rman target sys/******@dbadb01 catalog ...

...
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03010: fatal error during library cache pre-loading
RMAN-10038: database session for channel default terminated unexpectedly

Setting debug and trace on the command line has no effect, there is nothing of use in the trace file.

rman target sys/****** debug all trace=trace.log

The contents of trace.log after this were as follows, which is pretty much normal except for the error messages at the end. No help at all, in other words.

DBGMISC:    ENTERED krmksimronly [09:55:45.190]
...
Calling krmmpem from krmmmai
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03010: fatal error during library cache pre-loading
RMAN-10038: database session for channel default terminated unexpectedly

Setting event 10046 on the entire database with alter system ... and/or running a session 10046 trace for SYS logins using a database trigger also revealed nothing. Not even a trace file for any RMAN sessions.

Much searching with Google resolved nothing. My MOS account is not connected yet, despite requests, to the current support identifier, so I can't go hunting for a fix on MOS. And the admins are off today as well.

However, this was a subtle clue:

set oracle_sid=dbadb01
rman target / catalog ...
...
connected to target database: dbadb02 (DBID=1170775433)
...

Really? dbadb02? I asked for 01, so what's going on?

The database dbadb02 was cloned using RMAN yesterday. It should have a new DBID and suchlike, so let's check:

tnsping dbadb01

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = test_server)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dbadb01)))
OK (20 msec)

So far so good, next:

tnsping dbadb02

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = test_server)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = dbadb02)))
OK (20 msec)

That looks fine too. Next, check the database:

set oracle_sid=dbadb02
sqlplus / as sysdba

select name,value
from v$parameter
where lower(value) like '%dbadb01%';

NAME              VALUE
----------------- -------------------------------------
instance_name     dbadb01
service_names     dbadb01
dispatchers       (PROTOCOL=TCP) (SERVICE=dbadb01XDB)
audit_file_dest   C:\ORACLEDATABASE\ADMIN\dbadb01\ADUMP

Bingo! RMAN doesn't change these parameters after a clone, so the database needs fixing. It looks like RMAN is attempting to connect to the dbadb02 database with the service name of dbadb01, rather than connecting to dbadb01 on that service name.

There was supposed to be a script, executed by the clone script, to fix up those parameters, but obviously it didn't work, or some other fault occurred (the DBA responsible will be getting a slapped wrist shortly!) - the log files will need to be checked!

A quick fix later:

alter system set instance_name='dbadb02' scope=spfile;
alter system set service_names='dbadb02' scope=spfile;
alter system set dispatchers='(PROTOCOL=TCP) (SERVICE=dbadb02XDB)' scope=spfile;
alter system set audit_file_dest='C:\ORACLEDATABASE\ADMIN\dbadb02\ADUMP' scope=spfile;
startup force

select name,value
from v$parameter
where lower(value) like '%dbadb01%';

no rows selected

And now, does RMAN work?

rman target sys/******@dbadb01
...
connected to target database: dbadb01 (DBID=673233917)

Result!

How to Start an Oracle Database When You Are Not in the DBA Group

This applies to Linux and Unix as well as Windows, but affected me on a Windows 2012 Server running Oracle 11.2.0.4 Enterprise Edition.

My user on the server was an administration user, but not in the ora_dba group. This is required to connect / as sysdba within SQL*Plus. The SYS password had been changed recently but whoever did it, did not update the password vault. The users were urgently requiring their database be started, I was the only DBA in the office, the SYS password was unknown, and my user didn’t belong directly to the ora_dba group. What to do?

It’s not quite the dreadful hack that the title of this post may indicate. Depending on the setup for the server, you may need administrator rights to move files around. Plus, most importantly, you do need to know the SYS password for at least one database on the server.

My user account was indirectly a member of the ora_dba group, via my administrator rights, but it seems I need to be directly a member of the group to log in / as sysdba.

That said, the short, bullet point method is as follows (a scripted sketch follows the list):

  • cd %ORACLE_HOME%\database.
  • set ORACLE_SID=dbadb01.
  • Rename the current password file pwddbadb01.ora to pwddbadb01.ora.keep.
  • Copy another password file, for which I did know the SYS password, to pwddbadb01.ora.
  • sqlplus sys/known_password as sysdba.
  • startup.
  • exit.
  • Delete pwddbadb01.ora.
  • Rename pwddbadb01.ora.keep back to pwddbadb01.ora.
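
As a scripted sketch of the above – the file names follow this example, and pwddbadb09.ora stands in for whichever password file you do know the SYS password for:

rem Swap in a password file with a known SYS password, start up, swap back.
cd /d %ORACLE_HOME%\database
set ORACLE_SID=dbadb01

ren pwddbadb01.ora pwddbadb01.ora.keep
copy pwddbadb09.ora pwddbadb01.ora

rem At the SQL prompt: STARTUP, then EXIT.
sqlplus sys/known_password as sysdba

del pwddbadb01.ora
ren pwddbadb01.ora.keep pwddbadb01.ora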

This way I got the database started, the users were happy, and I made sure I got the password vault updated to save me this grief next time!

Dropping Temporary Tables (With Bonus, Broken Check Constraints!)

I found a broken check constraint, one that simply wouldn’t work, on a database. It was created as:

... CHECK(COLUMN_NAME IN ('Y','N',NULL)) ;

Try it yourself, it doesn’t work! Anyway, I needed to find if there were any other check constraints broken in this manner, so I did the following:

    select owner, 
           table_name, 
           constraint_name, 
           search_condition
    from   dba_constraints
    where  owner = 'XXXXX' 
    and    constraint_type = 'C'
    and    upper(search_condition) like '%IN%,%NULL%'
    order  by 1,2,3;    

Of course, that barfed because the SEARCH_CONDITION column is a LONG data type. Sigh! I thought those things were deprecated! Never mind, I did this next:

-- Can't filter search_condition as it's a LONG data type.
create global temporary table check_constraints on commit preserve rows
as (
    select owner, 
           table_name, 
           constraint_name, 
           search_condition
    from   dba_constraints
    where  owner = 'XXXXX' 
    and    constraint_type = 'C'
) ;

select * from check_constraints
where upper(search_condition) like '%IN%,%NULL%';

-- Do other meaningful stuff here ....
-- Then eventually ...

drop table check_constraints;

But the DROP TABLE command resulted in an error ORA-14452: attempt to create, alter or drop an index on temporary table already in use. Hmm!

First of all, I had no indexes, so the message is slightly misleading; regardless, I couldn’t drop my temporary table when I was finished with it.

The solution is amazingly simple:

truncate table check_constraints;

After that, dropping the table “just works”. (The table was created with ON COMMIT PRESERVE ROWS, so my session was still bound to it; truncating releases that binding and the DROP is then allowed.)

And yes, there were quite a few broken check constraints. Duvelopers!

What’s Broken? Well, if you create a check constraint as per the one listed back at the start of this eRant, you will not see any errors from Oracle. Nor will you see any errors when you INSERT or UPDATE rows with invalid values in the column. It’s that NULL in the IN list that kills your constraint. COLUMN_NAME IN ('Y','N',NULL) expands to COLUMN_NAME='Y' OR COLUMN_NAME='N' OR COLUMN_NAME=NULL, and that last comparison always evaluates to UNKNOWN, never TRUE or FALSE. A check constraint only rejects rows where its condition is FALSE, and an OR involving UNKNOWN can never be FALSE here, so any value you have in the column will be accepted.

drop table test cascade constraints purge;

create table test(a varchar2(1));

alter table test add constraint chk_a
check (a in ('Y','N', NULL));

insert into test(a) values ('N');
insert into test(a) values ('Y');
insert into test(a) values (NULL);
insert into test(a) values ('T');

Huh? That last one couldn’t have worked, could it?

select A from test;

 A
--
 N
 Y

 T

Yup, the constraint is indeed useless.
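
The fix, for the record, is to keep NULL out of the IN list – NULLs pass a check constraint anyway, precisely because UNKNOWN is not FALSE:

alter table test drop constraint chk_a;

alter table test add constraint chk_a
check (a in ('Y','N'));

insert into test(a) values ('T');
-- Now fails with ORA-02290: check constraint violated.

insert into test(a) values (NULL);
-- Still succeeds - NULLs are allowed through, as intended.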

Have fun!

Which Extra Cost Oracle Options is my Windows Server Running?

It’s always nice to know which extra cost Oracle options are enabled, whether deliberately or silently as the result of some patching that has taken place.

Copy and paste the code below into a Windows command file named – in my case – checkChoptOptions.cmd and execute it against any Oracle Home. It is called as follows:

checkChoptOptions [Oracle_home]

The Oracle Home is optional; if omitted, the script will check the currently set %ORACLE_HOME% location.

This version has been tested on a Windows 2012 server running Oracle 11.2.0.4.

The script displays output on the screen and also to a logfile in the script directory. If you see that an option is both enabled and disabled, beware – someone has executed chopt, probably without administrator rights, and may well have created an empty *.dll.dbl file. You should probably delete the *.dll.dbl and do a proper chopt disable as administrator.
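
By way of illustration, the proper disable would look something like this, run from an elevated (administrator) command prompt. The option keyword – rat here, for Real Application Testing – varies by version, so check chopt's own usage output first:

cd /d %ORACLE_HOME%\bin
chopt disable rat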

Enjoy.

@echo off
Rem   ================================================================
Rem  |     Check an existing Oracle Home for expensive EE Options     |
Rem  |                                                                |
Rem  | Norman Dunbar.                                                 |
Rem  | August 2016.                                                   |
Rem   ================================================================
Rem 
Rem 
Rem   ================================================================
Rem  | USAGE:                                                         |
Rem  |                                                                | 
Rem  | checkChoptOptions [oracle_home]                                | 
Rem  |                                                                |
Rem   ================================================================
Rem 
setlocal EnableDelayedExpansion


Rem   ================================================================
Rem  | Internal Variables.                                            |
Rem   ================================================================
set VERSION=1.00
set ERRORS=0
set MYLOG=.\%0.log
set ORA_HOME=%1

Rem   ================================================================
Rem  | The next two define the first and last entry in the following  |
Rem  | "arrays" which are not really arrays, honest!                  |
Rem   ================================================================
set FirstEntry=0
set LastEntry=6


Rem   ================================================================
Rem  | The following "arrays" are not arrays. They look like they are | 
Rem  | but are actually just a pile of scalar variables with '[n]' in |
Rem  | their name. Oh, and we have to use '!variable_name! later on   |
Rem  | too for some unfathomable reason.                              |
Rem   ================================================================

Rem   ================================================================
Rem  | List of Oracle chopt'able options.                             |
Rem   ================================================================
set Option[0]=Partitioning
set Option[1]=OLAP
set Option[2]=Label Security
set Option[3]=Data Mining
set Option[4]=Database Vault option
set Option[5]=Real Application Testing
set Option[6]=Database Extensions for .NET

Rem   ================================================================
Rem  | List of DLLs that exist for enabled options.                   |
Rem   ================================================================
set Enabled[0]=oraprtop11.dll
set Enabled[1]=oraolapop11.dll
set Enabled[2]=oralbac11.dll
set Enabled[3]=oradmop11.dll
set Enabled[4]=oradv11.dll
set Enabled[5]=orarat11.dll
set Enabled[6]=clr

Rem   ================================================================
Rem  | List of DLLs that exist for disabled options.                  |
Rem   ================================================================
set Disabled[0]=oraprtop11.dll.dbl
set Disabled[1]=oraolapop11.dll.dbl
set Disabled[2]=oralbac11.dll.dbl
set Disabled[3]=oradmop11.dll.dbl
set Disabled[4]=oradv11.dll.dbl
set Disabled[5]=orarat11.dll.dbl
set Disabled[6]=clr.dbl

Rem   ================================================================
Rem  | Clear any existing logfile.                                    |
Rem   ================================================================
Rem 
del %MYLOG% > nul 2>&1

call :log %0 - v%VERSION% : Logging to %MYLOG%
call :log Executing: %0 %*


:check_oracle_home
if "%ORA_HOME%" EQU "" (
    set ORA_HOME=%ORACLE_HOME%
)

if "%ORACLE_HOME%" EQU "" (
	call :log ORACLE_HOME is not defined.
	set ERRORS=1
)

if not exist %ORA_HOME% (
    call :log ORACLE_HOME "%ORA_HOME%" - not found.
    set ERRORS=1
)

:check_errors

if %ERRORS% EQU 1 (
	call :log Cannot continue - too many errors.
	goto :eof
)


Rem *******************************************************************
Rem *******************************************************************
:JDI

call :log Checking ORACLE_HOME = "%ORA_HOME%".

for /L %%f in (%FirstEntry%, 1, %LastEntry%) do (

    Rem Is this option enabled?
    if exist %ORA_HOME%\bin\!Enabled[%%f]! (
        call :log !Option[%%f]! - is currently enabled.
    )
    
    Rem Also check if it is disabled too. This needs investigating.
    if exist %ORA_HOME%\bin\!Disabled[%%f]! (
        call :log !Option[%%f]! - is currently disabled.
    )
)
Rem *******************************************************************
Rem *******************************************************************

Rem   ================================================================
Rem  | And finally, turn off the "doing it correctly" setting.        |
Rem  | And skip over the sub-routines.                                |
Rem   ================================================================
Rem 
call :log %0 - complete.
endlocal
exit /b



Rem   ================================================================
Rem  |                                                          LOG() |
Rem   ================================================================
Rem  | Set up a logging procedure to log output to the %MYLOG% file.  |
Rem  | Each line is yyyy/mm/dd hh:mi:ss:                       |
Rem   ================================================================
:log

echo %*
echo %date:~6,4%/%date:~3,2%/%date:~0,2% %time:~0,8%: %* >> %MYLOG%
goto :eof

A test run gives the following, on screen:

C:\Users\ndunbar\Desktop>CheckOracleOptions.cmd
CheckOracleOptions.cmd - v1.00 : Logging to .\CheckOracleOptions.cmd.log
Executing: CheckOracleOptions.cmd
Checking ORACLE_HOME = "C:\OracleDatabase\product\11.2.0\dbhome_1".
Partitioning - is currently disabled.
OLAP - is currently disabled.
Label Security - is currently disabled.
Data Mining - is currently disabled.
Database Vault option - is currently disabled.
Real Application Testing - is currently enabled.
Database Extensions for .NET - is currently disabled.
CheckOracleOptions.cmd - complete.

And the logfile looks like this:

2016/08/25 15:29:58: CheckOracleOptions.cmd - v1.00 : Logging to .\CheckOracleOptions.cmd.log 
2016/08/25 15:29:58: Executing: CheckOracleOptions.cmd 
2016/08/25 15:29:58: Checking ORACLE_HOME = "C:\OracleDatabase\product\11.2.0\dbhome_1". 
2016/08/25 15:29:58: Partitioning - is currently disabled. 
2016/08/25 15:29:58: OLAP - is currently disabled. 
2016/08/25 15:29:58: Label Security - is currently disabled. 
2016/08/25 15:29:58: Data Mining - is currently disabled. 
2016/08/25 15:29:58: Database Vault option - is currently disabled. 
2016/08/25 15:29:58: Real Application Testing - is currently enabled. 
2016/08/25 15:29:58: Database Extensions for .NET - is currently disabled. 
2016/08/25 15:29:58: CheckOracleOptions.cmd - complete. 

Oraenv for Windows

Having recently had to learn a whole new way of working when I took on a contract migrating a database to the Windows “cloud”, I realised that there’s no equivalent to the useful Unix oraenv utility. I had to write my own. Give me a bash shell any day!

The utility is oraenv.cmd and executes like this:

set ORATAB=c:\users\ndunbar\oratab
...
oraenv

Update 30/08/2016: You can now pass the desired SID on the command line and avoid all that prompting stuff! Like this:

oraenv AZDBA01
...

Obviously, the ORATAB environment variable can be set in Control Panel, or in the shell session beforehand, etc. – as long as it is set somewhere.

The %ORATAB% file needs to look like the following:

ORACLE_SID | ORACLE_HOME | Optional comment text.

The default Unix separator of a colon, ‘:’, cannot be used here as the ORACLE_HOME field will no doubt have a colon in its name, given that Windows uses it as part of the drive specification. To get around that foible, I use a pipe character – ‘|’.

My own oratab file looks like this:

AZDBA01|C:\OracleDatabase\product\11.2.0\dbhome_1|# Staging database
AZDBA02|C:\OracleDatabase\product\11.2.0\dbhome_1|# Test clone
AZDBA91|C:\OracleDatabase\product\11.2.0\dbhome_1|# Standby for AZDBA01
AZDEV08|C:\OracleDatabase\product\11.2.0\dbhome_1|# Development
AZDEV12|C:\OracleDatabase\product\11.2.0\dbhome_1|# Development

Comments are not mandatory, but if present, there must be a pipe character – ‘|’ – after the end of the Oracle Home, or the comment becomes part of the %ORACLE_HOME% environment variable. Ask me how I know this!

In use, an example of the utility’s output would be similar to the following:

C:\Users\ndunbar>oraenv
Your session's current Oracle SID is 'azdba02'.

Please enter a new Oracle SID from the following list:
AZDBA01
AZDBA02
AZDBA91
AZDEV08
AZDEV12

Press ENTER/RETURN to use the current ORACLE_SID.
New SID [azdba02]: azdba91
ORACLE_HOME\bin is already on PATH.
ORACLE_SID has been set to 'azdba91'.
ORACLE_HOME has been set to 'C:\OracleDatabase\product\11.2.0\dbhome_1'.
NLS_LANG has been set to 'AMERICAN_AMERICA.WE8ISO8859P1'.
NLS_DATE_FORMAT has been set to 'yyyy/mm/dd hh24:mi:ss'.

As noted, the current %ORACLE_SID% is the default and pressing RETURN accepts that and leaves things unchanged.

If the desired %ORACLE_HOME% is on the %PATH% already, it is not added again. And this identifies a problem.

If there are numerous different Oracle Homes in use on the server, then the new one will be added to %PATH% but the old one(s) will not, at present, be removed. I need to find a decent stream editor – like sed – for Windows to allow me the opportunity to do that. However, in my current installations, we have a single Oracle Home for all our databases on the servers, so that problem isn’t affecting me at present. Famous last words?

In the event of a problem, the following error codes are returned (a calling sketch follows the list):

  • 0 = All ok.
  • 1 = ORATAB environment variable not set.
  • 2 = %ORATAB% not pointing to an (accessible) file.
  • 3 = Requested Oracle SID not found in %ORATAB% file.
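
So a calling script can branch on the result. A minimal sketch, assuming oraenv.cmd is somewhere on the %PATH%:

@echo off
rem Set the environment for AZDBA01 and bail out if oraenv complains.
call oraenv.cmd AZDBA01
if errorlevel 1 (
    echo oraenv failed with exit code %ERRORLEVEL% - aborting.
    exit /b %ERRORLEVEL%
)
echo Ready to work against %ORACLE_SID%.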

Anyway, here’s the code. Copy this and paste into your own oraenv.cmd, somewhere on your %path%, and you are all set to go. Don’t forget to set ORATAB first though.

Enjoy.

@echo off

REM =================================================================
REM Windows version, sort of, of the Unix oraenv script which
REM will set the desired Oracle Environment.
REM 
REM Requires the %ORATAB% environment variable pointing at a suitable
REM text file, which has the lines set up in the following format:
REM
REM SID | ORACLE_HOME
REM
REM We can't use the Unix default separator of a colon (:) as that 
REM is used already for the drive specifiers.
REM
REM EXIT CODES:
REM
REM 0 = All ok, environment set as requested.
REM 1 = Oops! ORATAB env var not set.
REM 2 = Oops! %ORATAB% not pointing to an accessible file.
REM 3 = Oops! Requested ORACLE_SID not found in %ORATAB% file.
REM
REM =================================================================
REM Norman Dunbar (norman@dunbar-it.co.uk)
REM June 2016.
REM
REM 30/08/2016 - Added ability to pass SID on command line.
REM =================================================================

REM Check if %ORATAB% is already set. Bail out if not.
REM This could be set in the System applet for Control Panel,
REM or, set in the shell prior to calling this code.
if "%ORATAB%"=="" (
    echo ORATAB not set. Cannot continue.
    exit /b 1
)

REM ORATAB needs to point at a file.
if NOT exist %ORATAB% (
    echo Cannot find the file '%ORATAB%'. Cannot continue.
    echo Check the value in ORATAB is correctly set, or, that
    echo the file exists.
    exit /b 2
)

REM Did we have a SID passed as a parameter?
set ORA_SID=%1

REM We don't do the next bit if we have a SID already.
if "%ORA_SID%"=="" (

    REM Display the Current ORACLE_SID.
    echo Your session's current Oracle SID is '%oracle_sid%'.
    SET ORA_SID=%oracle_sid%
    echo.

    REM List the available SIDs from the oratab file.
    echo Please enter a new Oracle SID from the following list:
    for /f "tokens=1 delims=|" %%a in (%ORATAB%) do (
        echo %%a
    )
    echo.

    REM Default to the current SID if the user just presses ENTER.
    echo Press ENTER/RETURN to use the current ORACLE_SID.
    SET /P ORA_SID="New SID [%ORA_SID%]: "
)

REM Check the %ORATAB% file to see if this ORACLE_SID is listed
REM If it's not then exit to the command prompt with error 3.
FIND /I "%ORA_SID%|" %ORATAB% > nul
IF NOT %ERRORLEVEL%==0 (
    echo Oracle SID not found
    exit /b 3
)


REM Set the ORACLE_SID.
set ORACLE_SID=%ORA_SID%


REM Now get the Oracle Home from the oratab file.
FOR /F "tokens=2 delims=|" %%a IN ('FIND /I "%ORA_SID%|" %ORATAB%') DO SET ORACLE_HOME=%%a


REM Next thing to do is set the Path. Ok, possible problem area!
REM In my own environment, everything has the same ORACLE_HOME so
REM there's no need to worry about removing any other ORACLE_HOME
REM from the PATH before adding this one. I need to think about this.
echo %PATH% | find /i "%ORACLE_HOME%\bin" > nul
if NOT %ERRORLEVEL%==0 (
    SET PATH=%ORACLE_HOME%\bin;%PATH%
) else (
    echo ORACLE_HOME\bin is already on PATH. 
)    


REM And as a nice little touch we will Print out some details for the user.
ECHO ORACLE_SID has been set to '%ORACLE_SID%'.
ECHO ORACLE_HOME has been set to '%ORACLE_HOME%'.

REM Uncomment this if you want your NLS_LANG and NLS_DATE_FORMATs setting.
set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
echo NLS_LANG has been set to '%NLS_LANG%'.

set NLS_DATE_FORMAT=yyyy/mm/dd hh24:mi:ss
echo NLS_DATE_FORMAT has been set to '%NLS_DATE_FORMAT%'.

exit /b 0
@echo on

Interesting Foible with Oracle Dates

I have a table with dates in, and some NULLs. Two people, on the same database, running the same SELECT query, in the same schema, with the same privileges, get vastly differing results. Why? Fine Grained Auditing is not at play here.

Table names, column names etc have been changed to protect the guilty.

In the table in question, the A_DATE column is correctly defined as DATE, rather than anything else unsuitable.

I have boiled the problem down to the following code. The table I’m using has 76 rows, of which 27 have NULL in the A_DATE column. It took a while to notice the bug in the code though; maybe I should do some more development work?

select count(*) from norm 
where 
trim(nvl(a_date, to_date('07/04/1960','dd/mm/yyyy'))) = to_date('07/04/1960','dd/mm/yyyy');

It looks OK and it does not give any errors, but running it gives inconsistent results depending on the setting of NLS_DATE_FORMAT:

alter session set nls_date_format='dd/mm/yyyy';

select count(*) from norm 
where 
trim(nvl(a_date, to_date('07/04/1960','dd/mm/yyyy'))) = to_date('07/04/1960','dd/mm/yyyy');

COUNT(*)
-------
     27

alter session set nls_date_format='dd-mon-rr';

select count(*) from norm 
where 
trim(nvl(a_date, to_date('07/04/1960','dd/mm/yyyy'))) = to_date('07/04/1960','dd/mm/yyyy');

COUNT(*)
-------
     27

alter session set nls_date_format='dd-mon-yy';

select count(*) from norm 
where 
trim(nvl(a_date, to_date('07/04/1960','dd/mm/yyyy'))) = to_date('07/04/1960','dd/mm/yyyy');

COUNT(*)
-------
      0

WTH? Zero? Really?

alter session set nls_date_format='dd-mon-yyyy';

select count(*) from norm 
where 
trim(nvl(a_date, to_date('07/04/1960','dd/mm/yyyy'))) = to_date('07/04/1960','dd/mm/yyyy');

COUNT(*)
-------
     27

alter session set nls_date_format='mm-dd-yyyy';

select count(*) from norm 
where 
trim(nvl(a_date, to_date('07/04/1960','dd/mm/yyyy'))) = to_date('07/04/1960','dd/mm/yyyy');

COUNT(*)
-------
     27

So, what’s going on here? Well, it seems from the docs that the TRIM() function is not really supposed to be applied to dates, but Oracle doesn’t complain. It implicitly converts the DATE to a VARCHAR2 – using the session’s NLS_DATE_FORMAT – and returns a VARCHAR2 value, not the DATE value the code appears to return.

That VARCHAR2 is then compared with the DATE value on the right side of the ‘=’, so there is more implicit conversion going on: when Oracle compares a character value with a DATE, it converts the character value back to a DATE, again using NLS_DATE_FORMAT. And there’s the problem. With ‘dd-mon-yy’, TRIM() produces ‘07-apr-60’, and converting that back with a ‘yy’ mask puts the year in the current century – 2060, not 1960 – so nothing matches and the count is zero. The ‘rr’ mask, and any mask with a four digit year, round-trip back to 1960, which is why the other formats return 27. Bouncing DATE values through VARCHAR2 for a comparison is always a bad idea.

Some of the other non-null dates in the table are:

17/03/2016
11/12/2015
02/12/2014
30/10/2014
29/10/2014
02/10/2013
14/10/2009
08/07/2008
03/07/2008
24/06/2008
05/06/2008

The fix? Obvious really: the developer intended to use TRUNC() but mysteriously typed TRIM() instead. Once changed, it “just” worked – for all known values of NLS_DATE_FORMAT!
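
For completeness, the corrected predicate, with TRUNC() doing what was intended:

select count(*) from norm 
where 
trunc(nvl(a_date, to_date('07/04/1960','dd/mm/yyyy'))) = to_date('07/04/1960','dd/mm/yyyy');

TRUNC() returns a DATE (with the time stripped back to midnight), so the comparison stays DATE to DATE and NLS_DATE_FORMAT never gets a look in.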

TraceMiner Utility – Updated

TraceMiner has been updated after a couple of foibles were found during the processing of a trace file.

The version has been bumped to 0.19 as of today, 2nd December 2016. The bugs fixed were:

  • The utility now notices EXEC ERROR lines in addition to the PARSE ERRORs it has been processing up until now. Just because it’s nice to know where things might have gone wrong.
  • Interesting bug, seemingly related to DBMS_METADATA.GET_CLOB calls, where the value for one bind is the bind number of the next one. The text in the trace file is “value= Bind#”, which is weird! It should be on two lines: everything after the equals sign, including the space, should be on the next line. Detected on Windows 11.2.0.4 and on Solaris 11.2.0.4.
  • Sort of fixed the problem where the same bind variable is used more than once in a statement; reused binds are flagged in the SQL as “__A_:BIND_REUSED__” at the moment, and the code now (sort of) handles them correctly. This will be properly fixed in future; an issue has been raised on GitHub for it.
  • Added test.cmd. Test harness for Windows users.

You can read more about TraceMiner here if you wish.

TraceMiner Updated Again

TraceMiner has been updated again. Mostly bug fixes, but there’s a little enhancement too. The current release is 0.21.

TraceMiner is a utility that parses an Oracle trace file, with binds listed (event 10046 level 4 or 12, etc) and extracts all the user submitted SQL statements and writes them to an output file with the bind variables replaced by the actual literals used when the statement was executed.

You can download the C source code from my GitHub repository https://github.com/NormanDunbar/TraceMiner, or download it directly from https://github.com/NormanDunbar/TraceMiner/archive/master.zip and compile it for your own machine. Currently, I know that the utility is happily running on Windows, AIX and Linux.

A number of fixes have been made between version 0.19 and the current, 0.21:

V0.20, which didn’t get released, made the following changes:

  • Data type 96, NCHAR or NVARCHAR, intermittent bug fixed. Sometimes it tried to extract the hex values from the previous line of text, not the current one. Weird! Thankfully, verbose output showed it up.
  • MAXBINDS upped from 50 to 150 – just because!
  • CLOSE of a cursor now gets handled by removing it from the linked list.

V0.21, which is the current release, has these changes:

  • CLOSE of a cursor is now handled differently. A large trace file showed that Oracle can re-parse the same SQL after a cursor has been CLOSEd and only if the cursor id was never used with a different SQL statement. Instead of PARSING IN CURSOR #1234 … followed by the full SQL text, it simply does PARSE #1234 again with no SQL text! That caused the subsequent EXEC #1234 to abort the program as the cursor wasn’t in the linked list – because CLOSE #1234 removed it – and was considered a fatal problem.
  • Data Type 96, NCHAR or NVARCHAR can cause problems as it is possible to output more than one line of hex values. Up until now, I’ve only ever seen one line but this is now fixed. The hex values in the second and subsequent lines are simply ignored. 😉
  • There was a segfault at EOF, only if running in verbose mode and with config.h having set with OFFSETFORRICH = 0. It was when freeing the memory allocated for the SQL statement.
  • The verbose output has been tidied up a bit. Also, if you ever need to extend the MAXBINDS or MAXBINDSIZE, you get a message in the output file, not in the debug file. Just in case you don’t have a debug file!

Enjoy.

TraceMiner2 and TraceAdjust

TraceMiner has been updated and rewritten. Numerous bugs and foibles have been fixed.

TraceAdjust is a new, useful, utility to carry out some pre-processing on a trace file before you have to use your own weary eyes to work through potential Oracle performance problems! That’s why I wrote it!

TraceMiner2

TraceMiner has been totally rewritten in C++ rather than vanilla C, which has had the bonus of allowing me to fix some bugs, and do away with the need to recompile whenever you hit a system limit of some kind. You can view the readme.pdf file here.

Source code is available from GitHub as you will need to compile this utility to match your database server, or analysis systems. Tested on Windows and Linux.

TraceAdjust

TraceAdjust is a new utility which massages a tracefile to do the following:

  • Adds a decimal point to the tim values, to separate full seconds from micro-seconds. My eyes are too old to do it manually! (See the worked illustration after this list.)
  • Adds a delta to the end of the trace line to show the difference between the previous tim and the current tim. This saves me doing it with a calculator, and multi-digit tim values!
  • Adds a running delta since the most recent timestamp record in the trace file. Perhaps not as useful as the above, but it works for me.
  • Converts the deltas so far into actual, locally oriented timestamps showing resolution down to the micro-second.
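
For illustration (the numbers are made up, and the exact output layout is TraceAdjust's business, not a promise), a raw tim is just a count of micro-seconds:

tim=1475584467504835

After processing, the decimal point separates the seconds from the micro-seconds:

tim=1475584467.504835

and if the next trace line carried tim=1475584467506045, the appended delta would be 0.001210 seconds.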

Having the tracefile show these values already worked out does make life a bit easier when you have to get dirty in the raw traces.

Have a look at the Readme.pdf file here, and, download the source code which, as ever, is available from GitHub as you will need to compile this utility to match your database server, or analysis system. Tested on Windows and Linux.

Compiling Sqlite3 Shell with Embarcadero/Borland C.

I wanted to compile the sqlite3 shell on Windows, using my Free Embarcadero C compiler, but it didn’t work. It was quite easy to fix, but if you are affected, read on.

First Attempt

After unzipping the amalgamation source files, a default compilation was attempted with the following command line. The -tCM option simply says to create a console-based, non-windowed application.

bcc32c -o sqlite3.exe shell.c sqlite3.c -tCM

Which results in the following compiler warnings and linker errors:

shell.c:158:3: warning: implicit declaration of function '_setmode' is invalid in C99...
shell.c:7018:26: warning: implicit declaration of function '_isatty' is invalid in C99...
...
Error: Unresolved external '__isatty' referenced from C:\SQLITE3\SHELL-264807.O
Error: Unresolved external '__setmode' referenced from C:\SQLITE3\SHELL-264807.O

The compiler warnings gave me the impression that functions named _isatty and _setmode do not exist, or that something is not setting up their declarations correctly. That’s usually the case with the warnings shown anyway.

Because all the warnings are in the file shell.c, that’s what we need to look at.

Fixing the shell.c Source File

Open shell.c in your favourite text editor. Alternatively, use Notepad!

Search for _isatty. You should find it around line 104.

Replace this code:

...
# define isatty(h) _isatty(h)
...

With the following:

...
# if !defined (__BORLANDC__)
# define isatty(h) _isatty(h)
# endif
...

Embarcadero, aka Borland, C doesn’t have anything named _isatty, but it does have isatty, and that is the function we need to be calling. Moving on…

Search for _setmode. You should find it around line 158.

Replace this code:

...
_setmode(_fileno(file), _O_BINARY);
...
_setmode(_fileno(file), _O_TEXT);
...

With the following:

...
#if defined (__BORLANDC__)
setmode(_fileno(file), _O_BINARY);
#else
_setmode(_fileno(file), _O_BINARY);
#endif
...
#if defined (__BORLANDC__)
setmode(_fileno(file), _O_TEXT);
#else
_setmode(_fileno(file), _O_TEXT);
#endif
...

Again, Embarcadero C doesn’t have _setmode; it has setmode instead, so that’s what we need to be calling here, twice.

Once the changes have been saved, exit from the editor and recompile.

Second Attempt

The same command line is used:

bcc32c -o sqlite3.exe shell.c sqlite3.c -tCM

Which results in the following:

Embarcadero C++ 7.20 for Win32 Copyright (c) 2012-2016 Embarcadero Technologies, Inc.
shell.c:
sqlite3.c:
Turbo Incremental Link 6.75 Copyright (c) 1997-2016 Embarcadero Technologies, Inc.

No errors, no warnings. Job done, we have a shell!

Testing, Testing

It. Just. Works!

Databases can be created, deleted, opened, closed, attached and so on; tables and indexes etc can be created and used. Life is good!
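
A quick smoke test, for example (the table and values are arbitrary):

sqlite3 test.db
sqlite> create table test(id integer primary key, name text);
sqlite> insert into test(name) values ('Norman');
sqlite> select * from test;
1|Norman
sqlite> .quit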


FLAC to MP3 as Easy as Pie!

I have ripped all my music, well most of it, to FLAC for the quality aspect. Sometimes though, I need to convert to MP3 for some of the lesser audio players out there that I might have to use from time to time.

I have recently come across a pretty nifty (Linux) way to do this, without having to cope with having duplicate files in FLAC and MP3 formats on my hard drives.

The utility I’ve discovered is called mp3fs and is a FUSE file system whereby a normal, non-root user can mount the FLAC folder and see the contents as MP3 files.

Once mounted in this way, the MP3 files can be played by, or copied to, a less well enabled device and will be converted to MP3 on the fly. I don’t then have to have MP3 files clogging up my music hard drives.

Installation

On my Linux Mint 18.2 setup, it’s a simple one liner:

sudo apt-get install mp3fs

Usage

First, create the folder where my FLAC files will appear as MP3 files. I’m calling mine mp3:

mkdir ./mp3

Then mount the FLAC folder on to the new mp3 folder. The FLAC files live in /media/norman/USB_MUSIC and sub-folders below this mount point:

mp3fs -b 192 /media/norman/USB_MUSIC ./mp3

The -b 192 part sets the bit rate for the MP3 output files. Other values are available.

Now, if I do a quick check, I see the following:

ls ./mp3
Benny Andersson  Carole_King  Fleetwood_Mac  Tangerine Dream  Zero Project

ls ./mp3/Tangerine\ Dream/
Quantum_Gate

ls ./mp3/Tangerine\ Dream/Quantum_Gate/
01 - Sensing_Elements.mp3
02 - Roll_the_Seven_Twice.mp3
03 - Granular_Blankets.mp3
04 - It_is_Time_to_Leave_When_Everyone_is_Dancing.mp3
05 - Identify_Proven_Matrix.mp3
06 - Non-Locality_Destination.mp3
07 - Proton_Bonfire.mp3
08 - Tear_Down_the_Grey_Skies.mp3
09 - Genesis_of_Precious_Thoughts.mp3

It’s looking good. Now I can copy my wife’s new CDs from the folders above to the device she wants to play them on, as MP3 files. My FLAC ripped files will be converted to MP3 on the fly as the copy progresses.

Once completed, I can unmount the mp3 folder as follows:

fusermount -u ./mp3

There are numerous options that can be supplied to the mp3fs command, including one to automatically unmount the folder after the file operation has completed. I prefer to manually unmount things, as I might want to do other stuff later.

Using FUSE to Mount an SSH Folder Locally

I have recently come across a pretty nifty Linux utility that allows me to mount a remote filesystem on an SSH server, locally and without requiring root privileges to do so. The remote filesystem happens to be where my backups are located, so that’s going to be useful for making and restoring backups!

The utility I’ve discovered is called sshfs and is a FUSE file system whereby a normal, non-root user can mount the remote folder and see the contents as if they were actually in a local folder.

Once mounted in this way, the remote files can be copied to, from, deleted etc in the normal manner.

Installation

On my Linux Mint 18.2 setup, it’s a simple one liner:

sudo apt-get install sshfs

Usage

First, create the folder where my remote files will appear. I’m calling mine sshfiles:

mkdir ./sshfiles

Then mount the remote folder on to the new sshfiles folder. The backup files live in the norman/backups folder on a server named wd, and the user account I need to log in to is my own, norman:

sshfs norman@wd:norman/backups ./sshfiles

Now, if I do a quick check, I see the following:

ls ./sshfiles
Backup_scripts  Downloads       Records         data            
Calibre         Home            SourceCode              

It’s looking good. Now I can copy my local folders to the backup device by copying them locally to the sshfiles folder. The sshfs utility will do the needful in copying them across the network to the correct server.

Once my backups (or restores) are completed, I can unmount the sshfiles folder as follows:

fusermount -u ./sshfiles

Arduino Nano – Cannot Upload Sketches after Board Upgrade to 1.6.21

My Arduino Nano started to refuse to upload sketches after I upgraded the boards library to version 1.6.21 from 1.6.20. All I got was this:

...
avrdude: stk500_recv(): programmer is not responding
avrdude: stk500_getsync() attempt 1 of 10: not in sync: resp=0x00
avrdude: stk500_recv(): programmer is not responding
avrdude: stk500_getsync() attempt 2 of 10: not in sync: resp=0x00
...

Repeat 100 times!

After reverting to the previous boards library, version 1.6.20, everything worked; upgrading again made it stop. So it was definitely the new boards package at fault.

The solution, from the arduino.cc forums, is simple:

Tools -> Processor -> ATmega328P (Old Bootloader)

Read the forum thread at https://forum.arduino.cc/index.php?topic=532983.0 if you wish. I’m just noting this here for my own reference and definitely not claiming any credit.

IMPDP Hangs, or Appears to Hang – But Has it?

You know the score: you are running an impdp and it looks to have hung. You’ve watched the log file (or on-screen messages) and it’s sitting at something like:

Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX

But it hasn’t moved from there for what seems like hours. The alert log for the database is of no help, as there are no errors or warnings logged there. What’s going on?

Is the import actually running? Check DBA_DATAPUMP_JOBS to find out:

select owner_name, job_name, operation, job_mode
from dba_datapump_jobs 
where state='EXECUTING' ;

Which gives something like:

OWNER_NAME JOB_NAME           OPERATION JOB_MODE
---------- ------------------ --------- --------
DUNBARNOR  SYS_IMPORT_FULL_01 IMPORT    FULL

So, we see at least one full import job is running. Good news. Do we have any sessions running though?

select owner_name, job_name, session_type 
from dba_datapump_sessions;

And we see this:

OWNER_NAME JOB_NAME           SESSION_TYPE
---------- ------------------ -------------
DUNBARNOR  SYS_IMPORT_FULL_01 DBMS_DATAPUMP
DUNBARNOR  SYS_IMPORT_FULL_01 MASTER
DUNBARNOR  SYS_IMPORT_FULL_01 WORKER
DUNBARNOR  SYS_IMPORT_FULL_01 WORKER
DUNBARNOR  SYS_IMPORT_FULL_01 WORKER
DUNBARNOR  SYS_IMPORT_FULL_01 WORKER

So, we have the master session, and a few workers. My job is running with parallel=4 in the parameter file, so that’s why there are 4 workers. Are they actually doing anything?

select v.status, v.sid,v.serial#,io.block_changes,event 
from v$sess_io io, v$session v 
where io.sid = v.sid 
and v.saddr in (
    select saddr 
    from dba_datapump_sessions
) order by sid;

STATUS SID  SERIAL# BLOCK_CHANGES EVENT
------ ---- ------- ------------- --------------------------------------------
ACTIVE 45   27      197679        PX Deq: Execute Reply 
ACTIVE 324  431     5484          wait for unread message on broadcast channel
ACTIVE 614  89      15406         wait for unread message on broadcast channel
ACTIVE 757  105     50130         wait for unread message on broadcast channel
ACTIVE 891  169     77216         wait for unread message on broadcast channel
ACTIVE 1035 59      76471         wait for unread message on broadcast channel

Hmm, looks like nothing is working at all. Every session appears to be waiting for something to be broadcast. The number of BLOCK_CHANGES should be increasing if the job was working correctly, shouldn’t it?

The job has definitely hung.

Or has it?

Well, in this particular case, the clue is on screen, in the log file, and noted above.

The job is Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX, so I’m hoping it’s creating indexes. Can we check? Of course!

For reasons of space across the page, I’ve had to substr() a couple of columns in the following SQL statement. You should refrain from doing so to get the full picture.

select s.sid, s.module, s.state, 
       substr(s.event, 1, 21) as event,
       s.seconds_in_wait as secs, 
       substr(sql.sql_text, 1, 30) as sql_text
from v$session s
join v$sql sql on sql.sql_id = s.sql_id
where s.module like 'Data Pump%'
order by s.module, s.sid;

And now we can see what’s going on. The SQL_TEXT column shows that a number of parallel sessions are indeed creating indexes:

SID   MODULE            STATE    EVENT                  SECS SQL_TEXT
----  ----------------  -------  ---------------------  ---- ------------------------------
614   Data Pump Master  WAITING  wait for unread messa  0    BEGIN :1 := sys.kupc$que_int.r
45    Data Pump Worker  WAITING  PX Deq: Execute Reply  64   CREATE INDEX "DUNBARNOR"."XIE1
192   Data Pump Worker  WAITING  direct path read temp  0    CREATE INDEX "DUNBARNOR"."XIE1
757   Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
757   Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
757   Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
857   Data Pump Worker  WAITING  direct path read temp  0    CREATE INDEX "DUNBARNOR"."XIE1
891   Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
891   Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
891   Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
1019  Data Pump Worker  WAITING  PX Deq: Execution Msg  64   CREATE INDEX "DUNBARNOR"."XIE1
1035  Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
1035  Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
1035  Data Pump Worker  WAITING  wait for unread messa  1    BEGIN :1 := sys.kupc$que_int.t
1324  Data Pump Worker  WAITING  PX Deq: Execution Msg  64   CREATE INDEX "DUNBARNOR"."XIE1
1749  Data Pump Worker  WAITING  direct path read temp  0    CREATE INDEX "DUNBARNOR"."XIE1
2034  Data Pump Worker  WAITING  PX Deq: Execution Msg  67   CREATE INDEX "DUNBARNOR"."XIE1
2153  Data Pump Worker  WAITING  direct path read temp  0    CREATE INDEX "DUNBARNOR"."XIE1
2177  Data Pump Worker  WAITING  PX Deq: Execution Msg  66   CREATE INDEX "DUNBARNOR"."XIE1

So, the impdp job is still running and is still working; it’s just not importing data at the moment, it is building indexes.

Why does it appear hung? This is an exceedingly large table, with far too many indexes for comfort. They all need to be recreated, so this takes time. Repeatedly executing the above query (the non-substr()‘d version I mean) will show the names of the indexes changing every time it completes one index and moves on to the next. You will also see it moving on to a different table name when it has built all the indexes on the currently displayed table.
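
If you want a rough progress meter as well, V$SESSION_LONGOPS is worth a look – a sketch; not every operation reports in there, but index builds of this size usually do:

select sid, opname, target,
       sofar, totalwork,
       round(sofar / totalwork * 100, 1) as pct_done
from   v$session_longops
where  totalwork > 0
and    sofar <> totalwork
order  by sid;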

You will, I hope, also notice a number of SIDs in the above output which are never mentioned in the preceding query results. This, I suspect, is why there are a lot of hits on the web about the wait event wait for unread message on broadcast channel related to impdp (or expdp). So far, none of those hits seem to go into any detail about the sessions you don’t find in DBA_DATAPUMP_JOBS or DBA_DATAPUMP_SESSIONS. Perhaps a quick look in V$SESSION is more helpful when trying to track down suspected hangs in these utilities?

So, it turns out the job was not hung after all. At least, not in this case.

Enjoy.

Snorkelling in the Oracle Listener Logs.

(Snorkelling is not quite as in depth as a “deep dive”!)

Attempting to parse a listener.log will probably bend your brain, but I needed to do it recently to determine which unique servers, desktops and/or application servers were still connecting to a database prior to that database going down for maintenance. This was an exercise in confirming that the documentation we have is correct.

According to the Net Services Administrator’s Guide, there are a number of different message types that can appear in a listener.log:

  • A client connection request.
  • A RELOAD, START, STOP, STATUS or SERVICES command, issued by lsnrctl.

However, I’ve found that a tnsping also logs a message – probably because it is, sort of, a connection request – plus regular “service update” messages also appear, and finally, error messages.

Each entry in the file consists of up to 6 different fields, as follows:

  • Timestamp.
  • Connect Data.
  • Protocol Information (optional).
  • Event.
  • SID or SERVICE (optional).
  • Result code.

The fields are separated by asterisks. It’s very nice of Oracle to mention this in the documentation, but actually scanning the file has shown that there can be more, or fewer! More on that later.

In the following, servers, databases and IP addresses have been obfuscated to protect the innocent, me! Even the dates and times are somewhat fictitious.

Connection Requests

Connection requests come in two types, successful and failed. Both have, as far as my listener logs are concerned, the full complement of 6 fields:

15-MAY-2018 10:34:44 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=ORCL_RW)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=35405)) * establish * ORCL_RW * 0

A failed connection request is normally followed by an error message.

14-JUN-2018 10:07:37 * (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=root))(SERVICE_NAME=ORCL_RO)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=34082)) * establish * ORCL_RO * 12514
TNS-12514: TNS:listener does not currently know of service requested in connect descriptor

Lsnrctl Commands

These have a reduced complement of fields, only 4.

14-JUN-2018 10:14:34 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=my_server)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=LISTENER)(VERSION=202375680)) * status * 0

Tnsping Requests

These have 3 fields, separated by asterisks.

15-MAY-2018 10:36:43 * ping * 0

Service Updates

These also have 3 fields, separated by asterisks.

15-MAY-2018 10:34:50 * service_update * pnet01p1 * 0

Reload Requests

These have 4 fields, and the following is copied directly from the docs as I’m not allowed to carry out a reload on my listeners!

14-MAY-2009 00:29:54 * (connect_data=(cid=(program=)(host=sales-server)(user=jdoe))(command=reload) (arguments=64)(service=listener)(version=135290880)) * reload * 0

Service Registration

These have 3 fields, and the following is copied directly from the docs as all my databases registered before the start of the listener log!

14-MAY-2009 15:28:43 * service_register * sales * 0

Service Died

These have 3 fields, and the following is copied directly from the docs as none of my services have died, yet!

14-MAY-2009 15:51:26 * service_died * sales * 12537

Error Messages

These normally follow on from a failed connection request, or command, and only have a single field:

TNS-12514: TNS:listener does not currently know of service requested in connect descriptor

Timestamp Messages

It appears also that there can be numerous Timestamp lines in the listener.log, these too consist of a single field.

Fri Jun 01 14:15:30 2018

Parsing the Listener Log

Enough background, moving on…

I’m using awk to process the log files: it works, I can tell it to use an asterisk as the field separator, and so on. I’m not even using any of the special GNU gawk extensions, as I don’t have gawk on this server.

In this exercise, I’m really only interested in connection attempts, and they have (or should have) 6 fields separated by ‘*’ characters. However, being a suspicious type, I’d better check:

awk -F* '{print NF;}' /u01/listener.full.log | sort -n -u

0
1
3
4
6
8
10

Hmm, looks slightly unpromising. Let’s extract all those different line types to separate files. If your listener.log is as big as mine, that exercise will take a while! Don’t bother with the zeros, they are the blank lines.

for x in 1 3 4 6 8 10
do
    awk -F* -v XXX=${x} '{if (NF == XXX){print $0;}}' /u01/listener.full.log > /u01/listener.${x}.log
done

ls -l /u01/listener*

  65013791 Jun 14 11:06 /u01/listener.1.log
    739825 Jun 14 11:07 /u01/listener.3.log
  45683981 Jun 14 11:07 /u01/listener.4.log
2288202297 Jun 14 11:08 /u01/listener.6.log
       342 Jun 14 11:08 /u01/listener.8.log
       382 Jun 14 11:08 /u01/listener.10.log
2399640823 Jun 14 10:23 /u01/listener.full.log

The listing above is slightly edited for ordering and space purposes. It shows just the size in bytes, the date/time and the file name.

  • /u01/listener.1.log is the full list of error messages and timestamp records. Nothing to see here!
  • /u01/listener.3.log is the list of tnsping type messages, version messages etc. Nothing to see here either!
  • /u01/listener.4.log is a list of “service update” events, plus the lsnrctl status or lsnrctl services commands.
  • /u01/listener.6.log is a list of the connection requests, successful or failed. This is what I’m interested in.
  • /u01/listener.8.log is a list of corruptions, possibly caused by the listener being briefly unavailable. The format of this file seems to be two entries amalgamated into one. Best avoided!
  • /u01/listener.10.log is a similar problem to listener.8.log above. Another set of corruptions.

In order to whittle down the amount of data I’m scanning, I have decided to extract only those rows with 6 fields, and which have a zero response code. These are the successful connection attempts. This is what I’m trying to gather figures for.

I will use the following script to extract the data from my own listener.6.log file, which is the extract of the 6 field records from the full listener log. However, the script still checks – in case I wish to use it on the listener.log itself.

I’m only keeping the Connect Data, Protocol Data and the SID/Service as I don’t need the rest.

#! /usr/bin/awk -f
#
# Parses the listener.log file (or whatever comes in on stdin) to
# find any line with 6 fields. These will be:
#
# $1 Date and time. Unwanted.
# $2 Connect Data.
# $3 Address data including host from where the request came from.
# $4 Usually "establish". Unwanted.
# $5 The service name connecting to. Kept, just in case.
# $6 The response code. 0 is good. We only want zero.
#
# The output will be only those fields we want from any connection request that worked.
#
#
# USAGE:
#
# cat listener.log | awk -f extract_listener.awk > listener.temp.log
#
# OR:
#
# awk -f extract_listener.awk < listener.log > listener.temp.log
#
# OR:
#
# ./extract_listener.awk < listener.log > listener.temp.log
#
# Norman Dunbar
# 07/04/2018.
#

# This happens before the start of the file.
BEGIN{
    # Set  the incoming field separator.
    FS="*";

    # Set  the output fields separator too.
    OFS="*";
}

# This happens for every record in the file.
{
    # Only interested in records with 6 fields...
    if (NF == 6) {
        # And of those, only successful connection requests.
        if ($6 == "0") {
            print $2, $3, $5;
        }
    }
}

I’m running this as follows:

./extract_listener.awk < /u01/listener.6.log > /u01/listener.temp.log

The output file looks vaguely like this:

 (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVER=DEDICATED)(SERVICE_NAME=ORCL_RW)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=43511)) * ORCL_RW
 (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=ORCL_RW)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=47111)) * ORCL_RW
 (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVER=DEDICATED)(SERVICE_NAME=ORCL_RW)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=48619)) * ORCL_RW
 (CONNECT_DATA=(CID=(PROGRAM=JDBC Thin Client)(HOST=__jdbc__)(USER=norman))(SERVICE_NAME=ORCL_RW)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=40722)) * ORCL_RW
 (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=ORCL_RW)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=47113)) * ORCL_RW
 (CONNECT_DATA=(CID=(PROGRAM=)(HOST=__jdbc__)(USER=))(SERVICE_NAME=ORCL_RW)) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=47112)) * ORCL_RW

What I’m after is a list of unique programs, hosts and users from the connect data field, plus the host they came from on the protocol information field. I tried a couple of different script methods before finding a reasonably good workable one. The processing is:

  • Use the ‘(‘ as a field separator for the input file.
  • Use the ‘=’ as a field separator for the output.
  • Look for the text “CID=” in all fields.
  • If “CID=” is found, then the desired program is in the next field, host in the one after and user in the one after that. The host (from the protocol data) is in the second last field.

Here’s the script, which has the extremely meaningful name of a.awk!

#! /usr/bin/awk -f
#
# Parses whatever comes in on stdin (from extract_listener.awk) to extract 
# the PROGRAM, HOST and USER from the CONNECT DATA plus the HOST from the PROTOCOL
# DATA of a listener log file.
#
# We need to find "CID=" in any field, then output the following three fields -
# PROGRAM, HOST and USER, plus the second to last field - HOST.
#
# The input file fields are separated by '(' which will need double escapes.
# The output fields will be separated by '='.
#
# Beware: in awk, print $x+1 prints ($x)+1, not field x+1, which is
# why the wanted field numbers are calculated into variables below.
#
# USAGE:
#
# cat listener.temp.log | awk -f a.awk > listener.a.log
#
# OR:
#
# awk -f a.awk < listener.temp.log > listener.a.log
#
# OR:
#
# ./a.awk < listener.temp.log > listener.a.log
#
# Norman Dunbar
# 07/04/2018.
#
BEGIN {
   FS="\\(";
   OFS="=";
}

{
    # Count fields minus 1.
    myNF = NF - 1;

    # Find CID= ...
    for (x = 1; x < NF; x++) {
       if ($x == "CID=") break;
    }

    if (x != NF) {
        program = x+1;
        host = x+2;
        user=x+3;
        print $program, $host, $user, $myNF;
    } else {
      print "CID= Not Found\n";
      print $0;
    }
}

I did try to print $x+1, $x+2, $x+3 and $NF-1, but all I got was those digits! I originally blamed the AIX version of awk, but this turns out to be standard awk behaviour: the ‘$’ operator binds more tightly than ‘+’, so $x+1 means ($x)+1, that is, the value of field x plus one. Writing $(x+1) fetches field number x+1, which is what I wanted; calculating the field numbers into variables first, as above, does the same job.
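
A quick demonstration at the prompt:

$ echo "a b c" | awk '{ x = 1; print $x+1; print $(x+1); }'
1
b

The first print gives ($1)+1, and "a" is zero in a numeric context, hence the 1; the second prints field 2. Anyway, the output from a.awk looks like this: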

PROGRAM=)=HOST=__jdbc__)=USER=))=HOST=172.20.238.35)
PROGRAM=JDBC Thin Client)=HOST=__jdbc__)=USER=norman))=HOST=192.168.1.20)
PROGRAM=JDBC Thin Client)=HOST=__jdbc__)=USER=briansmith))=HOST=192.168.10.40)
PROGRAM=JDBC Thin Client)=HOST=__jdbc__)=USER=webonline))=HOST=192.168.10.62)

Next, I need to process this and extract a CSV file consisting of just the 4 data items I need - program, host, user and from host. The excellently named b.awk does this for me.

#! /usr/bin/awk -f
#
# Parses whatever comes in on stdin (from a.awk) to convert the input
# PROGRAM, HOST, USER and HOST (another host) into a CSV file.
#
# The input file fields are separated by '='.
# The output fields will be separated by ','. (It's CSV after all!)
#
# USAGE:
#
# cat listener.a.log | awk -f b.awk > listener.b.csv
#
# OR:
#
# awk -f b.awk < listener.a.log > listener.b.csv
#
# OR:
#
# ./b.awk < listener.a.log > listener.b.csv
#
# Norman Dunbar
# 07/04/2018.
#

BEGIN {
   FS="=";
   OFS=",";
}

# We get $2 = PROGRAM + ')'
#        $4 = HOST + ')'
#        $6 = USER + '))'
#        $8 = Calling Host + ')'
#
# Need to slice and dice. Oh, wrap the program in quotes in case "+ASM" comes up
# as this foxes Excel when imported! It thinks it's a formula because of the leading '+'.

{
   print "\"" substr($2,1,length($2) - 1) "\"",
         substr($4, 1, length($4) - 1),
         substr($6, 1, length($6) - 2),
         substr($8, 1, length($8) - 1);
}

That gives me this:

"",__jdbc__,,172.20.238.35
"JDBC Thin Client",__jdbc__,norman,192.168.1.20
"JDBC Thin Client",__jdbc__,briansmith,192.168.10.40
"JDBC Thin Client",__jdbc__,webonline,192.168.10.62

Now all I need to do is replace the empty programs with "Unknown Program" and any empty users with "Unknown User", then sort into a unique output. Here's the sed file I'm using to make the edits; it goes by the name of sed.file (I do like meaningful names!):

s/^"",/"Unknown Program",/
s/,,/,Unknown User,/
s/))) //

The last line is for those weird entries which have a very strange username! This minor edit gives me the following:

"Unknown Program",__jdbc__,Unknown User,172.20.238.35
"JDBC Thin Client",__jdbc__,norman,192.168.1.20
"JDBC Thin Client",__jdbc__,briansmith,192.168.10.40
"JDBC Thin Client",__jdbc__,webonline,192.168.10.62

So, finally, I can string it all together and sort the output into unique records, add a few headings and we are done:

./extract_listener.awk < listener.log |\
./a.awk |\
./b.awk |\
sed -f sed.file |\
sort -u > temp.csv

I have a headings file as follows:

PROGRAM,HOST,USERNAME,CALLING_HOST

Which I prepend to the above output, to get the final, desired csv file:

cat headings temp.csv > listener.csv

I now have the required listing of all the different programs in use to connect to the database, who connected and from which server. It makes for interesting reading, and surprisingly, matches the documentation!

Summary

There's lots of interesting information in the listener log files, some of it useful, some of it not so useful. Awk is a pretty decent way of mining the files for the information you need, and, even better, you don't have to write Perl scripts either! At least with awk, you can read it again after 6 months, and still understand it.

Enjoy.

Generate Entity Relationship Diagrams from a SQL Script.


Sometimes, just occasionally, you find yourself as a DBA on a site where, for some strange and unknown reason, you don’t have an Entity Relationship Diagram (ERD) for the database that you are working on. You could use a tool such as Toad, or SQL*Plus (or even, SQL Developer – if you must) to generate a list of referential integrity constraints. There has to be a better way.

The problem with lists is that they are just words. And they do say that a picture is worth a thousand words, so let’s do pictures.

Graphviz

Graphviz is a set of tools for visualising graphs (that’s directed or undirected graphs as opposed to Cartesian graphs by the way) and the source files for the utility are simple text files. So, can we generate an ERD in text format and have graphviz convert it to an image? Of course we can. But first, get thee hence to https://graphviz.gitlab.io/download/ and download the utility for your particular system – it runs cross platform and is free.

Generating Source

The following query will generate a list of parent -> child lines in the output that show the relationships between pairs of tables, given a suitable starting owner and table name.

-------------------------------------------------------------
-- Generate a simple "dot" file to be processed with GraphViz
-- to create an image showing the referential integrity
-- around a single table.
--
-- Norman Dunbar
-------------------------------------------------------------
set lines 2000 trimspool on trimout on
set pages 2000
set echo off
set feed off
set verify off
set timing off
set head off

accept owner_name prompt "Enter owner: "
accept table_name prompt "Enter table name: "

spool refint.dot

select '// dot -Tpdf -o this_file.pdf this_file.dot' || chr(10) || chr(10) ||
'digraph refInt {' || chr(10) ||
' splines=ortho' || chr(10) ||
' size=8.25' || chr(10) ||
' label="Referential Integrity around &&TABLE_NAME table.";' || chr(10) ||
' rankdir=LR;' || chr(10) ||
' edge [color=blue4, arrowhead=crow];' || chr(10) || chr(10) ||
' // &&TABLE_NAME is the starting table.' || chr(10) ||
' "&&TABLE_NAME" [shape=box, style=filled, color=blue4 fillcolor=cornflowerblue];' || chr(10) || chr(10) ||
' // The remaining nodes are this style.' || chr(10) ||
' node [shape=box, style=filled, color=dodgerblue2, fillcolor=aliceblue];' || chr(10) || chr(10) ||
' // These are the parent -> child edges.' || chr(10)
from dual;

with refint as (
    select constraint_name, table_name, r_constraint_name
    from dba_constraints
    where constraint_type = 'R'
    and owner = upper('&&owner_name')
),
primekey as (
    select constraint_name, table_name
    from dba_constraints
    where constraint_type in ( 'P', 'U')
    and owner = upper('&&owner_name')
),
links (child_table, f_key, parent_table, p_key) as (
    select refint.table_name, refint.constraint_name, primekey.table_name, refint.r_constraint_name
    from primekey join refint on primekey.constraint_name = refint.r_constraint_name
)
select distinct chr(9) || '"' || links.parent_table || '" -> "' || links.child_table || '";' as dot
--select distinct level, links.*
from links
start with links.parent_table = upper('&&TABLE_NAME')
connect by nocycle links.child_table = prior links.parent_table
order by 1
--order by parent_table, child_table, f_key
;

select '}' from dual;

spool off

The output looks remarkably similar to the following example:

// dot -Tpdf -o this_file.pdf this_file.dot

digraph refInt {
    splines=ortho
    size=8.25
    label="Referential Integrity around ORDER_ITEM table.";
    rankdir=LR;
    edge [color=blue4, arrowhead=crow];

    // ORDER_ITEM is the starting table.
    "ORDER_ITEM" [shape=box, style=filled, color=blue4 fillcolor=cornflowerblue];

    // The remaining nodes are this style.
    node [shape=box, style=filled, color=dodgerblue2, fillcolor=aliceblue];

    // These are the parent -> child edges.
    "ADDRRESS" -> "CUSTOMER";
    "COLLECTION_METHOD" -> "ORDER_ITEM";
    "COMPENSATION_LEVEL" -> "ORDER_ITEM";
    "CUSTOMER" -> "CUSTOMER_ORDER";
    "CUSTOMER" -> "ORDER_ITEM";
    "CUSTOMER_GRP" -> "CUSTOMER";
    "CUSTOMER_GRP" -> "PRICING_BAND";
    "CUSTOMER_ORDER" -> "DISCOUNT_USAGE";
    "CUSTOMER_ORDER" -> "ORDER_ITEM";
    "DISCOUNT_PLAN" -> "DISCOUNT_USAGE";
    "DISCOUNT_USAGE" -> "CUSTOMER";
    "ORDER_ORIGIN" -> "ORDER_ITEM";
    "ORDER_ITEM" -> "EXTRAS";
    "ORDER_ITEM" -> "ADJUSTMENTS";
    "ORDER_ITEM" -> "ORDER_ITEM";
    "ORDER_ITEM" -> "REFUND_PAYMENTS";
    "ORDER_ITEM_STAT" -> "ORDER_ITEM";
    "ORDER_STATUS" -> "CUSTOMER_ORDER";
    "PRICING_BAND" -> "WEIGHT_RANGE";
    "WEIGHT_RANGE" -> "ORDER_ITEM";
}

Generating the Image

The first line of the output file, refint.dot, shows the command line required to create a PDF version of the ERD. You can specify PNG, JPG, etc as desired. SVG is good for images that need to be scalable (and is usually the better quality output). To generate an SVG image, run the following command line:

dot -Tsvg -o refint.svg refint.dot

A file by the name of refint.svg will be created. And it looks like the following, for this particular example.

Referential Integrity Diagram

(OK, that’s actually a PNG file as WordPress tells me that I cannot upload SVG files, for security reasons.)
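
If you do this a lot, the two steps wrap up nicely in a little script. A sketch only: it assumes the SQL above has been saved as refint.sql, and that connecting with ‘/ as sysdba’ is acceptable on your system.

#!/bin/bash
# erd.sh - run the spool script (it prompts for the owner and table
# name), then render the resulting dot file as an SVG.
sqlplus / as sysdba @refint.sql
dot -Tsvg -o refint.svg refint.dot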

Caveats

The diagram is only as good as the referential integrity in your target database. This much should be obvious – if there are no referential integrity constraints, then all you will get is a single entity. If that’s the case, I’d be looking for another job as I suspect that all the required checking is being done in the application, rather than in the database – best avoided!
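
Before bothering with a diagram at all, a quick count of the foreign key constraints will tell you whether there is anything worth drawing. A sketch, using the same dba_constraints view as the main query; replace MYOWNER with the schema you are checking:

sqlplus -S / as sysdba <<'EOF'
-- Count the referential ('R') constraints owned by a schema.
select count(*) as fk_count
from   dba_constraints
where  constraint_type = 'R'
and    owner = 'MYOWNER';
EOF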

And finally, you will not generate a full schema ERD with this code, but it’s handy for stuff around and about a particular table.

Enjoy.

RMAN-20033: control file SEQUENCE# too low


Have you ever seen the error RMAN-20033: control file SEQUENCE# too low and wondered what could be causing it? 

If you look on MOS, you will probably see that the error is caused by the control file in use being older than the one most recently used to synchronise the RMAN catalogue, and that you should either recreate the database control file(s), or delete the database from the catalogue and register it again.

Think again! The problem could be caused by the fact that there are two backups attempting to run at the same time, and both need to synchronise with the control file.

This is especially true if you find that while the error is reproducible, it is intermittent in nature – it doesn’t fail every time, which it would if the control file was really to blame.
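
A crude first check, before rebuilding control files or dropping the database from the catalogue, is simply to see whether two RMAN clients really are running at the time the error appears. Something like the following, with the pattern adjusted to suit your own backup wrapper scripts:

$ ps -ef | grep -i '[r]man'

The [r] in the pattern is the old trick that stops grep from matching its own process.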


ENQ: TS – Contention


Thanks to http://www.dbaglobe.com/2010/08/drop-temporary-tablespace-hang-with-enq.html it was a simple matter to resolve the above enqueue wait on an attempt to drop a previously default temporary tablespace.

The session causing the problem was a DBSNMP session being run by the OEM agent on the server. The following script, from the above blog, allowed me to identify the session and sort out getting it ‘removed’ to allow the drop to continue.

SELECT se.username username,
       se.sid sid,
       se.serial# serial#,
       se.status status,
       --se.sql_hash_value,
       --se.prev_hash_value,
       se.machine machine,
       su.tablespace tablespace
       --su.segtype,
       --su.contents contents
FROM   v$session se,
       v$sort_usage su
WHERE  se.saddr = su.session_addr;

I’ve commented out a few of the columns that I’m not interested in at this point, but maybe another time….

USERNAME SID SERIAL# STATUS   MACHINE           TABLESPACE
-------- --- ------- -------- ----------------- ----------
DBSNMP   595 8273    INACTIVE myserver.mydomain NORMS_TEMP

The sid and serial# were then used to remove the session and allow the tablespace to be dropped.
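
For completeness, this is roughly the shape of the command involved. A sketch only: the sid and serial# are the ones from the output above, so substitute your own values, and you will need the appropriate privileges:

sqlplus -S / as sysdba <<'EOF'
alter system kill session '595,8273' immediate;
EOF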

The script above is a general purpose “who is using temp space at the moment” query, and has been added to my arsenal.

Useful AIX Unix Commands

proctree

The proctree command acts like the Linux pstree and displays the hierarchy of processes from a given starting process id.

View Large Files

If, when you view a large file, you get errors about it being too big, try the following:

echo "set ll=3501720 dir=/tmp" >> ~/.exrc

That will allow you to read …

Oracle Deadlock Analysis


There’s a new utility to assist in diagnosing the underlying cause of Oracle deadlocks. Interested?

You can download source code as well as binaries for Linux and Windows at https://github.com/NormanDunbar/DeadlockAnalysys/releases.

All you have to do is execute the DeadlockAnalysis utility with a list of Oracle trace files on the command line. Each trace file will get its own report, in HTML format, in the same directory as the trace file.
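
For example, something along these lines. A sketch: the binary name here is my guess based on the repository name, and the trace file names are made up, so check what the release actually unpacks to:

$ ./DeadlockAnalysis orcl_ora_1234.trc orcl_ora_5678.trc

That should leave an HTML report sitting next to each trace file.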

The report files use a CSS style sheet to format the output. This file can be edited to use your own installation style, if necessary, and if the style sheet exists in the output directory when the utility is executed, it will not be overwritten.

That’s about it really! There’s not much more to say except, enjoy! (Ok, one last thing, yes, I do wish I had managed to spell “analysis” correctly when setting up the github repository. Sigh.)

ATMega328P 8MHz on a Breadboard


I had an urgent need (!) to build a breadboard version of an Arduino board which I needed to run without the 16MHz crystal and the two 22pF capacitors used by most Arduino boards.

The following steps are what I had to do, as my brand new ATmega328P micro-controller came supplied with an Arduino Uno boot-loader installed. I didn’t want that one because it depends on having the 16MHz crystal, which takes up two of the I/O pins that I might have a good use for. Also, without it, I can run the device at 3.5V rather than 5V and get extra long life when running off batteries.

I had to first download the 8MHz configuration files for the Arduino IDE. These were obtained as a zip file from https://www.arduino.cc/en/uploads/Tutorial/breadboard-1-6-x.zip. (My Arduino IDE version is 1.8.6 but this download worked fine.)

The file was copied into my default ‘sketchbook’ location, which can be found within the Arduino IDE’s preferences dialogue. Mine was /home/norman/Arduino.

I then had to create a folder named hardware within my default sketchbook location, and unzip the downloaded file into that new folder – the IDE reads the extracted folders, not the zip itself.
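
On my Linux laptop, that boils down to something like the following. A sketch only: the download location is an assumption, and ~/Arduino is my sketchbook location from the previous step.

# Create the hardware folder in the default sketchbook location and
# unpack the breadboard configuration into it.
mkdir -p ~/Arduino/hardware
cd ~/Arduino/hardware
unzip ~/Downloads/breadboard-1-6-x.zip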

Finally, a quick shutdown and restart of the Arduino IDE and I had, on the Tools -> Boards menu, a new board entry, right at the bottom, for a bread-boarded 8MHz Arduino with an ATmega328P micro-controller. So far so good.

I built up the circuit on my breadboard, and as the ATMega328P was already configured, and fused, for a 16 MHz crystal, I had to fit one, plus capacitors to get it to work.

The next step should have been simply to select the board as the new one, and burn a boot-loader with the USBTiny programmer. However, that refused to work and gave me messages that the device signature was incorrect and it certainly looked that way as it came back all zeros – not good.

At this point, you should imagine me trying all sorts of fiddles, some blindfold, to get it working – all to no avail. So, eventually, I went back to basics.

I changed the board back to an Uno, and attempted to burn the bootloader – that worked. Hooray!

I uploaded the famous ‘blink’ sketch to the pseudo Uno. That worked fine.

Then, I chose the new board setting again, and tried once more to burn the boot-loader, and, surprise, surprise, that worked now! Now we are cooking!

So, if it was running at 8MHz, I should be able to pull out the crystal and its two load capacitor friends, and it should still work (It was running the ubiquitous ‘blink’ sketch at this point) … drum roll please ….

It worked! I now have a perfectly working 8MHz ATMega328P running on a breadboard, for now, without losing the two pins taken up by the crystal. Job done!

Anyone interested in the schematics? Here you are:


NormDuino v1.0.0

A few words about the above:

  • The AREF pin is not directly connected to VCC or any other external reference voltage. It goes through a jumper so that it can be connected and disconnected to and from VCC. You do not want a voltage on AREF if you decide to use the internally generated 1.1V reference voltage for the ADC and/or Analogue Comparator. If you do, it lets the magic blue smoke out and the device stops working.
  • The 100nF capacitor between AREF and GND is, however, always required, regardless of whether or not AREF itself is connected to any external reference voltage.
  • There is also a jumper on the main voltage supply. This is so that it can be powered from batteries, or from an FTDI programmer, which can supply the 5V required to power the AVR while being programmed, or afterwards.
  • C4, C5 and XTAL1 are there in case you want to upgrade to an Arduino Uno again and run the device at 5V and 16MHz instead of 3.5V and 8MHz. However, in the configuration I have, they are not required and can be omitted – after you burn the 8MHz boot-loader of course.
  • As you can see, I created the schematics with Fritzing. I quite like it for a quick and dirty one-off design, but it does drive me mad when I start routing PCBs or trying to get components adjusted to fit the bread-board layout. (It also, sometimes moves captions around – take note of R1’s caption – it’s not where it was in the original schematic!)

Build 64bit OCILIB Libraries for CodeBlocks


I tend to compile with gcc, in a bash session, on Windows 7. I use Code::Blocks as my IDE of choice and one of my projects, well, quite a few, use the excellent OCILIB library for accessing Oracle databases, by Vincent Rogier. I can’t recommend this library highly enough.

However, it comes with a Code::Blocks project file that builds 32 bit libraries, and I need 64 bit. Here’s how I do it.

None of the following is needed if you use something like Visual C as your compiler. I don’t!

Download the source code

Go to https://github.com/vrogier/ocilib/releases and grab the latest release. Watch out for version 4.6.0 as it will not compile on a system with an Oracle 12.1 client – but I have a fix. Version 4.6.1 (when it’s available) has the fix built in.

You need the file ocilib-x.x.x-windows.zip, so download it to wherever you keep your source files, and unzip it.

Amend the Supplied Project File

There is a project file called proj\mingw\ocilib_static_lib_mingw.cbp, so open that in Code::Blocks. You will notice that it has two options – ‘Release – ANSI’ and ‘Release – UNICODE’. I use the ANSI version, but the following gives instructions for both.

The project as supplied uses the built in compiler, which is a 32 bit only version of gcc. My system has a separately installed version that compiles 32 and 64 bit applications. So I need to change the existing targets to use my compiler instead of the built in one.

Change the Default Compiler

Go to Project -> Build Options.

Select the top level ocilib_static_lib_mingw option on the left. Do not select either of the two release targets at this stage.

The default, built in 32 bit compiler is named ‘GNU GCC Compiler’. I need to change this to ‘GNU GCC Compiler 32/64bit (TDM-GCC-64)’, which is the name of my separately installed compiler. If you are prompted to save the settings, choose ‘Yes’ then ‘OK’ on all the remaining dialogues that pop up. Finally, ‘OK’ your way back to the main screen.

The compiler has now been changed for all targets currently defined. Now, to add the new 64 bit targets.

Add 64bit Targets.

Go to Project -> Properties and select the ‘Build targets’ tab.

Select ‘Release – ANSI’ on the left and click the ‘Duplicate’ button, change the name to ‘Release 64bit – ANSI’ and ‘OK’.

Select the new target, if not already selected.

Under the ‘Output filename’ option, towards the right side, change the ‘lib32’ to ‘lib64’. Everything else remains the same.

Now repeat the above to create the ‘Release 64bit – UNICODE’ target, and ‘OK’ back to the main screen.

That now gives us 4 targets, but they are all 32 bit at present. We now need to change the compiler options.

Change the Compiler Options

Go to Project -> Build Options and select the very top of the tree – ocilib_static_lib_mingw. Do not select any of the sub-targets, yet.

On the ‘Compiler settings’ tab, scroll down the options list and make sure that both ‘Target X86 (32bit)’ and ‘Target X86 (64bit)’ are unselected.

Click ‘Release – ANSI’ on the left, and select ‘Target X86 (32bit)’.

Click ‘Release – UNICODE’ on the left, and select ‘Target X86 (32bit)’. Select ‘yes’ if prompted to save changes.

Click ‘Release 64bit – ANSI’ on the left, and select ‘Target X86 (64bit)’. Select ‘yes’ if prompted to save changes.

Click ‘Release 64bit – UNICODE’ on the left, and select ‘Target X86 (64bit)’. Select ‘yes’ if prompted to save changes.

Now ‘OK’ back to the main screen.

Build the Libraries

Now that we have 4 targets, it’s time to build the different libraries. In the main screen, select each of the 4 targets in turn, and then select Build -> Rebuild to make sure that any existing versions are rebuilt with the new compiler and options.

When done, there should be two new files in each of the lib32 and lib64 folders, these will be named libociliba.a and libocilibw.a.
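
To link one of my applications against the new 64 bit ANSI library, the compile command looks something like this. A hedged sketch from a bash session: the paths are assumptions based on where the source was unzipped, -lociliba matches the libociliba.a built above, and OCI_CHARSET_ANSI selects the ANSI flavour of the OCILIB headers. Depending on the OCILIB import mode you built with, the Oracle client library may also need linking.

# Compile and link a 64 bit ANSI OCILIB application with TDM-GCC-64.
g++ -m64 -DOCI_CHARSET_ANSI \
    -I/c/source/ocilib/include \
    myapp.cpp \
    -L/c/source/ocilib/lib64 -lociliba \
    -o myapp.exe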

Fix 4.6.0 Errors

If you get an error about OCI_ATTR_COL_PROPERTY_IS_CONID then you need to apply the following fix if release 4.6.1 is not yet available.

Edit the file src/column.c and change the code at lines 260 onwards to match the following – there are only two lines to add.

  259         if (value & OCI_ATTR_COL_PROPERTY_IS_GEN_BY_DEF_ON_NULL)
  260         {
  261             col->props |= OCI_CPF_IS_GEN_BY_DEFAULT_ON_NULL;
  262         }
  263
  264 #if OCI_VERSION_COMPILE >= OCI_18_1
  265
  266         if (value & OCI_ATTR_COL_PROPERTY_IS_LPART)
  267         {
  268             col->props |= OCI_CPF_IS_LPART;
  269         }
  270
  271         if (value & OCI_ATTR_COL_PROPERTY_IS_CONID)
  272         {
  273             col->props |= OCI_CPF_IS_CONID;
  274         }
  275 #endif
  276
  277     }
  278 }

All that is required is to add the two lines numbered 264 and 275 above, but not the line numbers please! You should now be able to compile the code.

Job done, I can now build 32 and 64 bit versions of my applications to access the databases.

Arduino Internal Temperature Measuring


The code below, somewhere, shows how to measure the temperature of the ATmega328 microcontroller built in to numerous Arduino boards. You can find all the gory details in my new book Arduino Software Internals available from Apress, Amazon and good bookshops everywhere.

A complete guide to how the Arduino Language works, and how it makes the hardware work.

Apress.com: https://www.apress.com/gb/book/9781484257890

Amazon.co.uk: https://www.amazon.co.uk/Arduino-Software-Internals-Complete-Language/dp/1484257898/

The Warning

Do not use this code if your Arduino has the AREF pin connected to any voltage source. When you enable the 1.1v bandgap voltage as the ADC reference, it goes Bzzzt! And all the magic blue smoke gets out! The only thing connected to AREF should be a 100nF capacitor, to ground. (Already built in on Arduino boards.)

The Listing

/*
 * ARDUINO - measure the internal temperature of the 
 * AVR ATmega328P using the ADC internal temperature 
 * input. See the data sheet for details.
 * 
 * (C) Norman Dunbar, July 21 2018.
 *
 * This code is in the public domain - use it and abuse
 * it as you wish! It is based, but not copied, from
 * an Atmel example found at:
 *
 * https://microchipdeveloper.com/8avr:avradc
 *
 */

void setup() {
    // Initialise the ADC to use the
    // internal 1.1V reference voltage. 
    ADMUX = (1 << REFS0) | (1 << REFS1);
    
    // Use the ADC multiplexer input
    // number 8, the temperature sensor.
    ADMUX |= (1 << MUX3);
    
    // Slow the ADC clock down to 125 kHz
    // by dividing by 128. Assumes that the
    // standard Arduino 16 MHz clock is in use.
    ADCSRA = (1 << ADPS2) | (1 << ADPS1) | (1 << ADPS0);

    // Non-standard 8MHz clock in use.
    //ADCSRA = (1 << ADPS2) | (1 << ADPS1) | (0 << ADPS0);
    
    // Enable the ADC and discard the first reading as
    // it is always 351 on my device.
    ADCSRA |= (1 << ADEN) | (1 << ADSC);
    (void)readADC();
    
    // Use the Serial monitor for output.
    Serial.begin(9600);
    Serial.println("Arduino Internal Temperature");
}


// Read the ADC result from the most recent conversion and
// start another before returning the current reading.
uint16_t readADC() {
    // Make sure the most recent ADC read is complete.
    while ((ADCSRA & (1<<ADSC))) {
        ; // Just wait for ADC to finish.
    }
    
    uint16_t result = ADCW;

    // Initiate another reading.
    ADCSRA |= (1 << ADSC);

    return result;
}


//-----------------------------------------------------------------
// There are many ways, it seems, to calculate the degrees C from
// the ADC. Read the chapter on the ADC to find out where they come
// from. Here are some:
//
// ADC - some random offset;
// (ADC - 247)/1.22;
// ADC - 273;
// (((ADC - (273 - 100 - TS_OFFSET)) * 128) / TS_GAIN) + 25.
// (ADC - 324.31) / 1.22
//
// I'm using the last one, as it's the one closest to my actual
// temperature measurements.
//-----------------------------------------------------------------


void loop() {

    // Running average of the ADC Readings for
    // better accuracy.
    uint32_t ADCTotal = 0;
    float ADCAverage = 0.0;
    uint16_t ADCReading = readADC();

    for (uint8_t x = 1; x < 101; x++) {
        ADCTotal += ADCReading;
        ADCAverage = (float)ADCTotal / (float)x;

        // Uncomment if you want a running commentary!
        /*
        Serial.print("ADC = ");
        Serial.print(ADCReading);
        Serial.print(" ");
        Serial.print("ADCTotal = ");
        Serial.print(ADCTotal);
        Serial.print(" ");
        Serial.print("ADCAverage = ");
        Serial.println(ADCAverage);
        */

        ADCReading = readADC();
    }
    
    // Print the ADC temperature.    
    float degreesC = (ADCAverage - 324.31) / 1.22;
    Serial.print(degreesC);
    Serial.print("C, ");
    
    // Convert to Fahrenheit. C * 1.8 + 32.
    Serial.print(degreesC * 1.8 + 32);
    Serial.println("F.");
    
    // Delay a second between readings.
    delay(1000);
}

Backing Up My Books


I have a lot of eBooks on my tablet and phone. I also have a backup on my Linux laptop. In the past I have often just plugged the tablet into the laptop, opened the appropriate MTP folder, selected everything and copied them all to the backup, overwriting the previous backup. There has to be a better way, surely? Step up rsync.

The problem is the MTP mount; however, I found a way to make it work. Those mounts, mainly from Android devices, usually open the appropriate file explorer application on the laptop. But they are mounted — somewhere — and with a bit of digging, we can find the mount point and, from that, run rsync to back up only the files that are new or updated.

Finding the Mount Point

The first problem, is where on earth has the Android device been mounted?

$ gio mount -li | grep -i activation_root |\
  cut --delimiter== --fields=2 |\
  cut --delimiter="/" --fields=3

SAMSUNG_SAMSUNG_Android_R52M70B1KWE/

So, that’s the mount point sorted. Well, part of it. I need to know where it’s mounted. This turned out to be:

/run/user/1000/gvfs/mtp:host=SAMSUNG_SAMSUNG_Android_R52M70B1KWE/

where 1000 is my user id. That colon will need escaping too, I suspect! Now, how to work that out in code in case it ever changes?

Finding my User Id

This is easy enough: the file /etc/passwd holds my user id in the third field, and fields are separated by colons, so, cut to the rescue!

$ grep ^$USER /etc/passwd |\
  cut --delimiter=: --fields=3

1000 

That was simple. (The id -u command would have done the same job, mind you.) So, this should work then? I’m using the short form of the cut parameters here, by the way. Less typing! I’m also using the $(...) form of extracting a result from a command, as WordPress has stopped me using backticks for some reason!

$ ## Where the backups live.
$ cd ~/Backups/Tablet/

$ ## Get my user id,
$ USER_ID=$(grep ^$USER /etc/passwd | cut -d: -f3)

$ ## Get the MTP mounted device.
$ DEVICE=$(gio mount -li | grep -i activation_root |  cut -d= -f2 | cut -d/ -f3)

$ ## Get the actual mount point for the MTP device.
$ MOUNT=/run/user/${USER_ID}/gvfs/mtp\:host\=${DEVICE}/Tablet/My_Books

$ ## Run the backup from MTP to Backups/Tablet/My_Books/
$ ## Trailing slash on "$MOUNT" is important here.
$ rsync --archive --verbose "${MOUNT}/" My_Books/

And, indeed it does. I did have a problem though. My_Books used to be named My Books. Neither the ls command nor rsync could see it on an MTP mount point, for some unknown reason. The file manager could, and allowed manual copying. However, a quick rename and all was well.

So, now I’ve built the above into a shell script and saved it in $HOME/bin/backup_mybooks.sh for future use.
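
For the record, the finished script looks very much like this; a sketch, with the folder names as used above:

#!/bin/bash
# backup_mybooks.sh - back up the tablet's My_Books folder, over MTP,
# to ~/Backups/Tablet/My_Books/.
set -e

cd "${HOME}/Backups/Tablet"

# My numeric user id, for the gvfs mount point.
USER_ID=$(grep "^${USER}" /etc/passwd | cut -d: -f3)

# The MTP mounted device name.
DEVICE=$(gio mount -li | grep -i activation_root | cut -d= -f2 | cut -d/ -f3)

# The actual mount point for the MTP device.
MOUNT="/run/user/${USER_ID}/gvfs/mtp:host=${DEVICE}/Tablet/My_Books"

# Trailing slash on the source: copy the contents, not the folder itself.
# Note that --archive implies --recursive, so the latter isn't needed.
rsync --archive --verbose "${MOUNT}/" My_Books/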