Linux Shared Memory and Your MultiValue Database
These days, if you are running a MultiValue database, most of your users and executives expect instant access to their data. In the good old days (sometime before I was even doing this technology thing), there was the "Data Processing" department. The Data Processing department took care of the database, often developed new applications, and ran the reports for the users and executives. Today, the users want the ability to run their own reports and "drill down" on the data to be able to find what they are looking for easily and quickly. If you find yourself the only MultiValue developer in your company, without a lot of funding, you may be thinking, "Yeah, yeah, my company will never buy any tool to do that anyway, so what's the point? Plus, I haven't got the time to figure out how to write that kind of a tool."
If you find yourself reading this and remembering the good old days, take heart, because you aren't an old dog. You're just experienced, and you have a lot you can still teach to the new dogs, while learning some new tricks.
Age and history aside, let's take a step back for a moment and consider, fundamentally: "What do I need to give my users instant access to their data?"
At a minimum you need the following:
- Take input from the user and pass it to a MultiValue subroutine.
- Have that subroutine pass some information back.
- Display that information to the user in a meaningful way.
Yes, the process above oversimplifies the technology you might use underneath to do this, but in reality, this is all you need to talk to the web, a mobile phone, or whatever other device a user may be using in the year 2012 and beyond.
In its simplest form, the business of passing a MultiValue subroutine some user input and having that subroutine write data out is often done with simple operating system files. If you have ever written a Basic program to export data into an Excel spreadsheet, you have already done this. If you have been reading Kevin King's series of articles on connecting your MultiValue database to the web using the PHP language and the Apache web server, you will notice Kevin is doing much the same thing. With an Apache web server running alongside your MultiValue database server, you can serve data to web-enabled devices and give users real-time access anywhere they have an Internet connection. Again, this is nothing new to some of you.
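A minimal sketch of that file-based handshake, with both sides simulated in shell so the flow is visible end to end. The directory, file names, and field names are hypothetical; in real life the reply would come from your Basic subroutine, not an echo:

```shell
#!/bin/sh
# File-based request/response handshake, simulated entirely in shell.
# All paths and field names here are made up for illustration.
DIR=$(mktemp -d)                          # stand-in for a shared directory
echo "CUSTID=1001" > "$DIR/request.txt"   # web side: drop the user's input
# --- your MultiValue subroutine would read request.txt here and  ---
# --- write response.txt back; we fake its reply for the demo:    ---
echo "NAME=ACME SUPPLY" > "$DIR/response.txt"
reply=$(cat "$DIR/response.txt")          # web side: pick up the answer
echo "$reply"
rm -r "$DIR"                              # clean up the demo directory
```

Swap the `mktemp` directory for a location both Apache and your database can reach, and you have the core of the technique this article builds on.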
Calling a Basic subroutine with some user input and having it write data out is very powerful, and with enough thought it can do some really elegant things besides just sending the user a text file.
However, there is one small downside to doing this on a large system. The naysayers will quickly point the finger and say, "If I have 1,000 users all calling subroutines on my system and writing data out, that is an awful lot of extra disk I/O. Plus, what about connection pooling? I need those responses to be fast and efficient! Why make a session log on and log off every time a request is made? That is really inefficient."
Well, Mr. Naysayer, if you are still reading, and your MultiValue database server happens to be running any recent flavor of Linux, then I have a proposition for you: use shared memory, instead of the physical disk, as the transport layer for the data. Yes, this method may not have connection pooling and some other nice features, but because you are using the system RAM, it will still be really fast (despite the inefficiencies). So let's take a look at how to take advantage of Linux shared memory.
By default, any Linux operating system with a fairly recent 2.6.x kernel includes an implementation of shared memory that is very easy to use. The best part is, you don't need to know any programming language other than MultiValue Basic to use it.
Now for the fun part. Get yourself to the Linux shell (preferably with a vt or some supported emulation) and type the following:
df -h
Look for a line that looks similar to this:
tmpfs 16G 0 16G 0% /dev/shm
Some of you who know a bit about Linux are saying, "Wait a minute, didn't you just ask me to display the amount of free disk space on my system?" Yes, that is true, but what you may be unaware of is that the little line that says tmpfs is a filesystem mounted in shared memory! This means the mount point /dev/shm is not using the disk!
This filesystem is created by default on any Linux install and is mounted every time the machine boots. By default, /dev/shm can grow to half the system RAM. And if the system starts running short of memory, fret not: tmpfs pages can be swapped out to the swap space on the physical disk.
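You can cross-check that half-of-RAM figure on your own box by comparing the tmpfs size against MemTotal. A small sketch; note that containers and custom fstab entries commonly override the default, so don't be alarmed if your ratio differs:

```shell
#!/bin/sh
# Compare the size of the /dev/shm tmpfs with total system RAM.
# Both values are in kilobytes. On a stock install /dev/shm is about
# half of RAM; containers and custom mounts often override that.
shm_kb=$(df -k /dev/shm | awk 'NR==2 {print $2}')
ram_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
echo "/dev/shm size: ${shm_kb} KB, total RAM: ${ram_kb} KB"
```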
If you are still a skeptic about this being a real filesystem, try opening your favorite Linux shell editor and saving a text file there.
You might do the following:
nano -w /dev/shm/thisiscool.txt
Type in some test text, hit Ctrl+X, and answer "yes" to save the file. Now enter:
ls -l /dev/shm
You should see your file there.
You might also be thinking, "Wait a minute, this looks an awful lot like a RAM disk. You mean this thing is taking up system RAM whether I use it or not?" The answer is no, /dev/shm is not a RAM disk in the traditional sense. It is shared memory. It looks like an old-school RAM disk, it even smells like one, but it only uses RAM when you write something to it. When you delete something from it, it frees up the RAM.
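You can watch that happen with df. A quick sketch, assuming a writable /dev/shm; the 10 MB file size is arbitrary:

```shell
#!/bin/sh
# Show that tmpfs only consumes memory for what is actually stored in it.
SHM=/dev/shm
before=$(df -k "$SHM" | awk 'NR==2 {print $3}')     # KB used before
dd if=/dev/zero of="$SHM/demo.dat" bs=1024 count=10240 2>/dev/null  # write 10 MB
after=$(df -k "$SHM" | awk 'NR==2 {print $3}')      # usage jumps by ~10 MB
rm "$SHM/demo.dat"                                  # deleting the file frees the RAM
final=$(df -k "$SHM" | awk 'NR==2 {print $3}')
echo "used before=$before KB, after write=$after KB, after delete=$final KB"
```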
Here is the best part: you can create an F-pointer in any of your accounts to /dev/shm and then create items in your new shared memory file from your Basic programs or your favorite MultiValue editor.
On OpenQM, that F-pointer will look like this:
0001 F
0002 /dev/shm
On UniVerse, the F-pointer is very similar:
0001 F
0002 /dev/shm
0003 D_VOC
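Because Linux sees /dev/shm as an ordinary directory, each record in that F-pointed file is just a file on the mount: write one from the shell and it is immediately visible from Basic, and vice versa. A quick sketch from the shell side (the item name is made up; in a directory-type MultiValue file, newlines in the OS file typically correspond to attribute marks in the item):

```shell
#!/bin/sh
# Create a two-attribute item in /dev/shm straight from the shell.
# In a directory-type MultiValue file, each newline in the OS file is
# typically translated to an attribute mark when Basic reads the item.
printf 'FIRST.ATTRIBUTE\nSECOND.ATTRIBUTE\n' > /dev/shm/DEMO.ITEM
item=$(cat /dev/shm/DEMO.ITEM)    # read it back, as any program could
echo "$item"
rm /dev/shm/DEMO.ITEM             # free the RAM when done
```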
Now, before you go off creating F-pointers all over the place in your accounts, let's discuss a few caveats. First, there is security. You will notice the permissions on /dev/shm are pretty wide open, much like /tmp, so you may want to mount /dev/shm with somewhat stricter permissions. Be careful in doing so, because other software on your Linux system may need to write data there; try the change on a test system before touching any live system. Be careful who has access to this new MultiValue file, and think about who can and cannot write to it. You don't want some average user dumping data there and filling up system RAM.
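One way to tighten things up is with the standard tmpfs mount options in /etc/fstab: size= caps how big /dev/shm can grow, and mode= and gid= restrict who can write there. The 2G value and the "mvusers" group below are examples, not recommendations, and editing fstab needs root, so try this on a test box first:

```
# Example /etc/fstab entry for /dev/shm -- size=, mode=, and gid= are
# standard tmpfs mount options; the values and the "mvusers" group are
# hypothetical, so adapt them to your own system.
tmpfs  /dev/shm  tmpfs  defaults,size=2G,mode=1770,gid=mvusers  0  0

# Apply the change without a reboot (run as root):
#   mount -o remount /dev/shm
```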
Second, understand that this should never be used to store any data that you want to keep for any real length of time. All data in /dev/shm is completely wiped out when you reboot the system. So don't start storing transaction history, sales data, demand history, and the like in /dev/shm, because on your next reboot, it will be gone in the twinkling of an eye.
The other nice thing about this method is that it applies really well in the cloud, where disk I/O is often very limited. Imagine if you could rent a $20-a-month virtual Linux server in the cloud, install your MultiValue database, build your applications, and give users fast access to their data. Sure, a $20-a-month server may not serve 1,000 concurrent users, but you might be surprised how far you can make it go with a few tweaks like this shared memory technique.
And last, I'm sure there are other uses for Linux shared memory in your MultiValue system besides using it as a transport layer for getting data to the web. Just give it some thought. I've met so many bright people in our community over the years, I bet with a little creativity and thought you folks can think of more. So what are you waiting for? Go get your hands dirty and silence the complaints of, "I can't get the data I need!"