by Jim Alton
Integrating the desktop
Open DataBase Connectivity (ODBC) isn’t
new technology, but it has been fairly slow to gain mainstream acceptance.
Allowing PCs to access your corporate data directly is not something to
be taken lightly. But what is this ODBC thing? Part science, part black
magic?
Not knowing the answer to the above
question, I set out on a small quest. The folks at Minisoft and Brant
Technologies, their Canadian representative, were kind enough to lend me a
copy of their ODBC solution for the HP 3000. And the journey began…
Installation of the software really was a
snap. The entire package comes bundled in a PC file that needs to be
extracted, and then portions must be uploaded to the HP 3000. It made
sense to me: a client portion for the PC and a server portion for the system.
I had originally thought the PC portion of
ODBC would be standard for any server platform. In hindsight that was
terribly naive. There are a number of items specific to TurboImage that
must be passed to the server. One of the pleasant surprises was the
Minisoft support for Omnidex indexes (real programmers don’t read
documentation anyway, right?).
The HP 3000 portion was equally simple. Part
of the upload/installation procedure created a sample job stream. A few
modifications to suit our environment and voila! Oops. I forgot to copy a
database into the new account created for ODBC testing and we don’t
allow updates across account boundaries. With that problem resolved, I was
all set to turn my attention back to the PC.
One of the tools with the Minisoft ODBC
product is called "Schema Editor." This isn’t a PC version of
DBSCHEMA.PUB.SYS but, rather, a tool to allow a database analyst to
further describe data elements and ensure these elements are properly
represented in the PC world of applications. Quirky things like decimal
alignment and date field ordering are handled. The completed
"Schema" provides a rich explanation of the database, dataset,
and data items. It is stored on the HP 3000 to be utilized by all ODBC
clients.
The next task at hand is to establish what
Microsoft calls a Data Source. The name given to the Data Source will be
the name used by PC applications to access the data on the HP 3000 via
ODBC. In defining the Data Source I used the Minisoft - HP 3000 ODBC
driver and provided it with details such as: server name, listener port
number, database name, schema file location, dbopen mode, userid and
password for connection, and Omnidex enhancement.
Now, whenever an initial call to this Data
Source is made, the HP 3000 ODBC driver on the PC connects to the listener
job on the 3000, provides the pertinent details and begins a session with
the ODBC server software on the 3000. See Figures 1 and 2 for a snapshot
of defining a Data Source under Microsoft Windows NT.
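That connect-by-Data-Source flow maps directly onto the standard database API pattern that ODBC bridges expose. Here is a minimal sketch; since the Minisoft driver itself can't be exercised outside an HP 3000 shop, Python's built-in sqlite3 module stands in for the ODBC connection (with a real ODBC stack you would call something like pyodbc.connect("DSN=HP3000"), where "HP3000" is a hypothetical Data Source name):

```python
import sqlite3

# With a real ODBC stack this step would be, e.g.:
#   import pyodbc
#   conn = pyodbc.connect("DSN=HP3000;UID=reader;PWD=secret")
# where "HP3000" is the Data Source name defined in the ODBC
# administrator (hypothetical name). sqlite3 stands in here so the
# same DB-API calls can actually run anywhere.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Pretend this table is a TurboImage dataset exposed through ODBC.
cur.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "Acme"), (2, "Globex")])

# Once connected, queries look the same regardless of the back end.
cur.execute("SELECT name FROM customers ORDER BY id")
rows = cur.fetchall()
conn.close()
```

The point of the Data Source abstraction is exactly this: the application code never mentions the server name, port, or database location — only the Data Source name.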
This is where it all came to roost. My first
PC package to access TurboImage data on the HP 3000 via ODBC was Microsoft
Access. I simply told Access to create an empty database, then Link in a
Table with "Files of Type" ODBC. Up popped a selection list
which included the Data Source I had defined earlier. A quick click and I
was presented with a list of all the datasets within my test database.
Another click on the dataset I wanted and in a few minutes I was presented
with a Microsoft Access table view of my HP 3000 data. The ease of doing
this set me back a little, and a thought about job security crept into the
back of my mind.
I started grabbing any ODBC-compliant PC
tool within reach. In a couple of days I had query screens written in
Microsoft Access, reports written in Seagate Crystal Reports, data
input/search screens written in Microsoft Visual Basic 5.0, and another
version written in Powersoft’s Power++ (C++).
This euphoria hadn’t been part of my life
since I first unwrapped the mysteries of the SMTP protocol. Yes, I need a
life. With this one method of connecting to the database I now had a huge
array of tools that could be applied to existing business problems without
changing the data server or disrupting existing applications.
Then it broke. It soon became apparent that
the devil was not in the development but in the deployment of the
applications and the security of the data.
As with any technology, ODBC solves some
problems and creates some new issues. For instance, the task of installing
the ODBC driver on the PC and properly defining the Data Source is likely
beyond the ability of most of your users. This isn’t much of a problem
if you have 10 users in a single department who will be using ODBC. It’s
altogether a different matter if you have 3,000 users spread across the
enterprise.
Another problem of deployment is revision
control—ensuring that all your users have the latest copy of the
application can be difficult. Do you rely on your users to download the
latest software upon notification or do you implement an automated system?
One of the knocks against ODBC is that it
puts a strain on network bandwidth. I mentioned earlier that my first ODBC
connection took a few minutes to produce a result. I later discovered the
delay was due to an Access request that all records from the dataset be
sent. With a little digging I discovered there were ways to tell the
applications to pull down only a few pages of data at a time. But it makes
a very good point: poorly designed ODBC applications can seriously impact
a network and kill a project.
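The "few pages at a time" fix mentioned above is built into the standard cursor interface as fetchmany. A minimal sketch of the pattern, again using sqlite3 as a runnable stand-in for an ODBC connection (table and sizes are illustrative):

```python
import sqlite3

# sqlite3 stands in for an ODBC connection; the cursor API is the
# same DB-API 2.0 shape that ODBC bridges such as pyodbc expose.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER)")
cur.executemany("INSERT INTO orders VALUES (?)",
                [(i,) for i in range(250)])

cur.execute("SELECT id FROM orders")
pages = 0
while True:
    # Pull down only 100 rows per round trip instead of asking the
    # server to ship every record at once -- the behavior that made
    # my first Access query take minutes.
    batch = cur.fetchmany(100)
    if not batch:
        break
    pages += 1  # process this batch here

conn.close()
```

For 250 rows at 100 per page this loop makes three data-carrying round trips instead of dragging the whole dataset across the wire in one request.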
User security and data integrity also
present problems. Our environment has kept all user security at the
application level and kept users away from the operating system. We must
now either revisit that strategy or find a tool which will work with ODBC
to provide the security we need.
End user reporting tools can pose a data
integrity problem. Not from the manipulation of data on the server but
from misinterpretation of the data. For example, someone in the sales
department uses Microsoft Access to create a report that will be used to
project next year's sales figures. If this person does not fully
understand the various relationships of the data in the database he has a
high likelihood of producing an inaccurate report. User training, good
database documentation, and data views can help eliminate this type of
problem.
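A data view of the kind mentioned encodes the correct relationships once, on the server, so every downstream report inherits them. A minimal sketch, with sqlite3 standing in for the ODBC-visible database and all table and column names hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Hypothetical base tables a casual report writer might misjoin.
cur.execute("CREATE TABLE sales (cust_id INTEGER, amount REAL, voided INTEGER)")
cur.execute("CREATE TABLE customers (cust_id INTEGER, region TEXT)")
cur.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [(1, 100.0, 0), (1, 50.0, 1), (2, 75.0, 0)])
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(1, "East"), (2, "West")])

# The view states the join AND excludes voided transactions, so a
# sales projection built on it cannot accidentally double-count.
cur.execute("""
    CREATE VIEW v_net_sales AS
    SELECT c.region, SUM(s.amount) AS net
    FROM sales s JOIN customers c ON s.cust_id = c.cust_id
    WHERE s.voided = 0
    GROUP BY c.region
""")

cur.execute("SELECT region, net FROM v_net_sales ORDER BY region")
report = cur.fetchall()
conn.close()
```

The end user queries v_net_sales like any other table, but the voided-record rule travels with the data instead of depending on each report writer remembering it.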
None of these problems is insurmountable.
They each require careful planning and evaluation and a well-developed
strategy.
And what about Java? Does it have a place
here? Definitely, as does C++, Visual Basic, and other technologies.
Deployment of applications developed under different environments will
have varying characteristics. Visual Basic apps are relatively easy to
develop but are large when packaged for distribution. A C++ program can be
put into a single EXE file and distributed. Java applets/applications are
centrally stored and help to alleviate the revision control problem.
ODBC applications are going to require that
project managers have an intimate knowledge of the various PC application
development tools, a thorough understanding of the back-end database
technology, and good grasp of network topologies and capacities. (I began
feeling better about the longevity of my career.)
Each application and each environment will
need to be assessed properly to get a good result from an ODBC application
project. Factors such as geographical location of users, LAN vs WAN, and
knowledge of users' PC computing platforms will all play a role in
determining the best technology for your application.
Like so many other things in our profession,
ODBC isn’t a cure-all. But it sure is a nice "club" to have in
the bag. Fortunately ODBC for the HP 3000 is available in a number of
flavors, including the one supplied by Hewlett-Packard with MPE/iX 5.5
PowerPatch 3.
Jim Alton is a systems analyst with Mohawk
College and a special consultant with Fioravanti-Redwood, Inc. He has
worked in the HP 3000 environment for 13 years and has significant
experience with the Internet.