tag:blogger.com,1999:blog-156408902024-03-13T16:09:27.093+00:00Andrew Marlow's Web LogAndrew's random technical ramblings.Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.comBlogger92125tag:blogger.com,1999:blog-15640890.post-5463381508070167912024-02-18T14:58:00.003+00:002024-02-18T14:59:41.036+00:00Jenkins, git and ssh in a corporate environmentI reinstalled a later version of jenkins in order to dodge a CVE and found that git clone would no longer work. The terminal that started jenkins was getting messages prompting for the git ssh passphrase. The jenkins job just sat there on the git clone command without making any progress. I puzzled over this for ages. The previous version of jenkins had been working fine. I restarted the ssh agent but it had no effect. I googled to find out how to change my ssh credentials such that I had no passphrase (ill-advised though that may sound) and found articles claiming it was impossible. Well, it turns out it is possible. I did it and the jenkins problems went away. I don't like having an empty passphrase; it seems like bad practice, but hey ho, needs must. So here's how I reset the passphrase to be empty: the ssh-keygen -p command prompts for the current passphrase. Enter it, then when it asks for the new one (and confirmation) just hit return. Job done.Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-10509710775809805352023-06-23T12:06:00.000+00:002023-06-23T12:06:03.984+00:00How to display markdown files from the linux command lineIt took quite a while to track down how to do this. When you google for it you find GUI commands but not much for the command line. There are several tools but I have chosen one that works with what is available via the standard Red Hat repo for RHEL8. I use it even though my own machine is running mint 20.1. 
Going for something that is easy to install on RHEL8 means there is more of a chance that it will work in a corporate environment.
The command is called mdo and it is written in python. It can be pip'd into your virtual python environment. It requires prior installation of another component called rich, which can also be pip'd in. This is the great attraction of utilities written in python. They can be pip'd into your virtual environment and thus do not require root access to make them available. These components are on github at <a href="https://github.com/eyalev/mdo">https://github.com/eyalev/mdo</a> and <a href="https://github.com/Textualize/rich">https://github.com/Textualize/rich</a> .Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-58523914647889087832022-08-29T16:32:00.003+00:002022-08-29T16:32:47.688+00:00Many forks on github projectsWhen a project is not updated very often or goes by for years with no official updates, forks can proliferate. Then people who arrive at the site may want to know which forks are active. Luckily, there is a github project for solving this problem! It is called ActiveForks. If you go to <a href="https://techgaun.github.io/active-forks/index.html">https://techgaun.github.io/active-forks/index.html</a> you can enter the name of the github project and you will get a table of results, with the ability to sort on any of the presented columns.Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-90460325251247922862022-08-01T12:21:00.001+00:002022-08-29T16:29:11.405+00:00Windows and directories that cannot be (easily) deletedIf a directory contains nodes whose full pathname is greater than around 255 characters then Windows has tremendous difficulty deleting such a directory. But luckily, there is an easy way out. The 7-Zip command comes with an additional executable, 7zFM.exe which is the 7-ZIP File Manager. I recommend you put an icon for this on your desktop. 
It works a bit like a file explorer with one significant difference. If you click on a directory and enter shift-delete then it will delete that directory even if other commands fail due to the 255 problem. Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-18064658621686042652022-03-27T14:54:00.003+00:002022-08-01T12:17:09.320+00:00Function parameters that are fundamental types passed by value and constThe rule is to not do this in the header file.
Some people say don't do it in the cpp file either (I am in that camp)
but this does seem to be a matter of opinion.
See the abseil article <a href="https://abseil.io/tips/109">https://abseil.io/tips/109</a> for a discussion.
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-73463567070010836572022-03-27T14:33:00.001+00:002022-03-27T14:33:33.919+00:00Windows, X11, cygwin, fonts and XmingFor years I used the X11 server that is part of cygwin. It seemed to be a bit flakey but there didn't seem to be anything better.
Every now and then I would run into a problem where it would seem to work but xterm would complain about missing fonts.
So, I downloaded and installed xming-fonts (from https://sourceforge.net/projects/xming/files/Xming-fonts/7.7.0.10/Xming-fonts-7-7-0-10-setup.exe/download) on my local node (not the node that was running xterm) and that fixed the error. These days I no longer use the cygwin X11. I use XMing: see http://www.straightrunning.com/XmingNotes.
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-35863924216075313392021-05-31T17:10:00.004+00:002021-05-31T18:16:59.494+00:00Software Development links and comments<h2>Intro</h2>
<p></p>
I am in the process of decommissioning my website and moving my notes on software development and suchlike to my blog here.
<p></p>
<h2>ACCU</h2>
I am an active member of <a href="https://www.accu.org">ACCU (the Association of C and C++ Users)</a>.
It's been a long time since I had anything published by them. There are a couple of articles and a few book reviews.
<p></p>
<h2>C++ Coding Guidelines</h2>
<p></p>
Many years ago I started to write a book on this. It was never published. I did discuss an early draft with Addison Wesley but they did not show any interest. I discussed this with some ACCU people and the theory put forward was that maybe they had been approached by other authors on the same subject. About a year later Sutter and Alexandrescu had their guidelines published. Their book is very good and I recommend it. Their book is much better than what I was working on.
<p></p>
In a corporate environment I would never bother with a coding guidelines document these days. They are never read, never enforced, and can become out of date very quickly. They are also a rich source of arguments and ill-feeling. There has to be a better way. There is. It is called clang. I would have a jenkins job to use clang-format to format the code. That would take care of all whitespace and brace arguments. And I would use clang-tidy static code analysis (SCA) to find the more serious coding issues. There would be a jenkins job to ensure that the code was always SCA-clean. clang-tidy is not the easiest program to run since it needs to know what compiler options are used and that includes macros and the places where to look for include files. I have found that it helps to write a python script to take care of these things. It is worth the effort.
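To illustrate the kind of python wrapper I mean, here is a minimal sketch. Everything in it is hypothetical — the macros, include directories and check list would come from your own build (in practice perhaps from a compile_commands.json) — but it shows the point of the script: keeping the compiler options in one place so every clang-tidy invocation is consistent.

```python
#!/usr/bin/env python3
"""Sketch of a clang-tidy wrapper script. All paths, macros and
checks below are hypothetical placeholders for a real project's."""

# Flags the real build uses; collected once, used everywhere.
MACROS = ["-DNDEBUG", "-DPROJECT_BUILD=1"]
INCLUDE_DIRS = ["include", "third_party/include"]
CHECKS = "bugprone-*,performance-*,readability-*"

def build_tidy_command(source_file):
    """Return the argv list for one clang-tidy invocation."""
    cmd = ["clang-tidy", source_file, f"-checks={CHECKS}"]
    # Everything after '--' is passed to the compiler front end,
    # which is how clang-tidy learns the macros and include paths.
    cmd.append("--")
    cmd.extend(MACROS)
    cmd.extend(f"-I{d}" for d in INCLUDE_DIRS)
    return cmd

if __name__ == "__main__":
    # A jenkins job could loop over the sources, run each command
    # with subprocess, and fail the build on a non-zero exit status,
    # keeping the tree SCA-clean.
    for src in ["src/main.cpp"]:
        print(" ".join(build_tidy_command(src)))
```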
<p></p>
<h2>Sourceforge</h2>
<p></p>
Here are my own projects, hosted on SourceForge. They are old and have fallen into disuse really. If I was going to maintain them I would probably start by relocating them to github.
<ul>
<li><a href="https://sourceforge.net/projects/laum">LAUM</a> - Development has stalled. I hoped it would eventually become a suite of applications to help in the administration of groups of machines. The whole thing has been made a bit obsolete by docker and kubernetes.</li>
<li><a href="https://sourceforge.net/projects/fructose">FRUCTOSE</a> - wrote an LGPL'd C++ unit test framework. The main motivation was a simple, header-only framework that does not depend on boost. However, these days I recommend that people go with the Google unit test framework (gtest).</li>
<li><a href="https://sourceforge.net/projects/cycliclogs">Cyclic Logs</a> - wrote a GPL'd package to provide cyclic logfiles. I think this does still have a practical use in environments where the disk space is constrained.</li>
<li><a href="https://sourceforge.net/projects/depdot">DepDot</a> - wrote a GPL'd command (perl script) to show cyclic dependencies among libraries.</li>
</ul>
<p></p>
<h2>TeX</h2>
<p></p>
I am a keen user of TeX, via the LaTeX variant created by Leslie Lamport. I have been a member of the UK branch of the TeX Users Group for several years. I tend to produce most of my documentation using LaTeX. This allows me to produce PDF and postscript files (via DVI conversion programs) and RTF files (via latex2rtf). The RTF format is an open format and, due to its close integration with Microsoft Word for Windows, it is useful for people who require documents to be in a Microsoft format. I used to use latex2html to create web pages from my LaTeX documents, but have now found that <a href="http://hevea.inria.fr">HeVeA</a> does a better job and is much faster. It is written in OCaml.
For many years I experimented with alternatives to using LaTeX directly, flirting briefly with DocBook, and other approaches. I now conclude that there is just no substitute for writing in LaTeX directly.
<p></p>
<h2>CORBA</h2>
<p></p>
I feel great nostalgia when I think of CORBA. I liked it for a very long time. I was interested in CORBA right from the beginning (i.e. when the standard was so embryonic, CORBA would not even interoperate with itself!). Despite the complexity of the standard, I still think CORBA had a lot to offer. I have used several ORBs, some open source, some proprietary. My favourite used to be MICO but unfortunately the support for multithreading was never finished and development petered out around 2017, so TAO (the ACE ORB) is now the winner. I have also looked at JacORB by Gerald Brose. The best proprietary ORB (IMO) was Orbix from IONA (now owned by Progress).
<p></p>
For those interested in CORBA I recommend heading over to the web site of <a href="http://ciaranmchale.com/index.html">Ciaran McHale</a>, a former IONA consultant with whom I have worked. He has a free book there which I think provides a great practical introduction to programming with CORBA.
<p></p>
However, despite the nostalgia I have to admit that CORBA has had its day. The Rise and Fall are well documented by Michi Henning, see <a href="https://cacm.acm.org/magazines/2008/8/5336-the-rise-and-fall-of-corba/fulltext">https://cacm.acm.org/magazines/2008/8/5336-the-rise-and-fall-of-corba/fulltext</a>. Unfortunately there does not seem to be anything trying to replace it, except possibly ICE from <a href="https://zeroc.com/products/ice">ZeroC</a>. It is Open Source, which is obviously a good thing, but be advised that the license is GPL and so does not permit use in proprietary products (a separate license agreement is available with a purchase cost). If I was ever asked to work on a project where there was a need for some kind of service interface I would probably make it a web interface. That's the current fashion at the time of writing (2021) and there are umpteen frameworks. I would probably choose gRPC with Web Assembly. I would never use SOAP and I would be wary of REST.
<p></p>
<h2>Free Software and Open Source</h2>
<p></p>
Projects that I have contributed to include:
<ul>
<li><a href="https://www.copperspice.com/documentation-doxypress.html">DoxyPress</a></li>
<li><a href="https://pocoproject.org/">PoCo</a></li>
<li><a href="http://www.dre.vanderbilt.edu/~schmidt/ACE.html">ACE</a></li>
<li><a href="https://www.openssl.org/">OpenSSL</a></li>
<li>I did some work on ESNACC, an extended version of SNACC, an old ASN.1 compiler. ESNACC started because SNACC was an orphaned project with no support for C++, DER or PER (SNACC supported only BER). Sadly, work on ESNACC gradually fizzled out.</li>
</ul>
<p></p>
I have been an associate member of the <a href="https://www.fsf.org">Free Software Foundation</a> for many years.
<p></p>
I admit that I am not consistent when it comes to the ideals of the Free Software Foundation. I agree with the FSF in the same way that I agree with vegans. I know that unless one is a vegan one is supporting the animal food industry, which is full of cruelty and suffering. But I just can't go vegetarian, let alone vegan. I won't go into the reasons here. I know that I am supporting animal cruelty and I am not happy about it, but it is not going to change any time soon. In a similar way, despite the good things I find in the FSF, I am, unfortunately, supporting the proprietary software industry. My job involves the development of proprietary software and this has been the case my entire working life. That is not going to change (i.e. I am not going to have a change of career). I find the best I can do is to promote open source in the workplace. I know this is a rather feeble thing. After all, we know that Free Software and Open Source are different movements with different goals. But in my opinion the software industry as a whole will never understand the importance of Free Software. They are beginning to understand Open Source and that's better than nothing.
<p></p>
<h2>ASN.1</h2>
<p></p>
I really like ASN.1. I was first introduced to it way back in 1984 when the encoding standard was called X.409. It was used on Prime Computers for some of its client/server software and proved to be a boon when the protocol had to change, due to the use of sets and version numbers. Sadly, I have not seen it used much since, except of course in a few standard internet protocols.
<p></p>
I found out there is effectively a replacement for ESNACC, asn1c, which seems to be significantly better than either SNACC or ESNACC.
I haven't played with it yet. I wonder if I ever will.
<p></p>
There is a <a href="https://www.oss.com/asn1/resources/books-whitepapers-pubs/asn1-books.html#larmouth">useful book on ASN.1</a> that you might find interesting.
<p></p>
<h2>Heroes of software</h2>
<p></p>
There are so many potential heroes for a computer geek to look up to, but my favourite is <a href="https://www.turing.org.uk/index.html">Alan Turing</a>. He is regarded by many as the father of computer science. He is particularly admired by many of us in the UK for his work at <a href="https://www.bletchleypark.org.uk/content/museum.rhtm">Bletchley Park</a>. Turing's work there was part of the outstanding effort in decrypting German messages during the Second World War.
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-30231424295897826062020-10-22T16:06:00.004+00:002020-10-22T16:06:51.322+00:00Java has finally got strong crypto<p>For a long time now America has treated strong crypto as akin to munitions; a deadly weapon that must not be allowed to fall into the wrong hands. For the background to this, see the wikipedia page at <a href="https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States">https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States</a></p>
<p>The wikipedia page indicates that this attitude was significantly lessened in 1992 but the sad fact is that it persisted well beyond that for java. The Oracle release notes for JDK8 at <a href="https://www.oracle.com/java/technologies/javase/8all-relnotes.html">https://www.oracle.com/java/technologies/javase/8all-relnotes.html</a> say that the restriction was removed in January 2018, in update 161. The change was also backported to JDK7 in update 171.</p>
<p>This means that java projects using JDK8 had better move to at least this update version if they have not already. Of course, users of OpenJDK probably never had a problem and certainly don't now.</p>
<p>The way I ran into this problem was during work on a trade feed that uses the FIX protocol. The FIX session was secured with TLS 1.2. Everything was fine until one day the remote side changed from a weak crypto algorithm to a strong one. Our side failed with a mysterious SSL handshake error. This came from the mina package, as used by quickfixj. Mina doesn't seem to handle this situation well at all. We had to turn on packet level logging via the JVM option -Djavax.net.debug=all to see what
was happening. The log showed that the remote side wanted to use a strong algorithm but that many algorithms on our side were disabled. At the time the latest JDK8 update from Oracle was update 251. I switched to that and then all those messages about unknown algorithms disappeared and the algorithm preferred by the remote side was accepted. Everything started working again.</p>
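The failure mode is easy to show in miniature: a TLS handshake can only succeed if the intersection of the two sides' enabled cipher suites is non-empty. Here is a toy sketch of the negotiation (the suite names are illustrative, not an exact JDK list):

```python
def negotiate(client_suites, server_suites):
    """Return the first client-preferred suite that both sides have
    enabled, or None, which corresponds to a handshake_failure alert."""
    enabled = set(server_suites)
    for suite in client_suites:
        if suite in enabled:
            return suite
    return None

# Before the JDK update: only a weak suite enabled on our side.
ours_old = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
# The remote side switched to offering only a strong suite.
theirs = ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"]
print(negotiate(theirs, ours_old))   # no overlap -> None -> handshake fails

# After the update the strong suite is enabled locally too.
ours_new = ours_old + ["TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"]
print(negotiate(theirs, ours_new))   # overlap -> handshake succeeds
```

The -Djavax.net.debug=all log is essentially showing you these two lists, which is how we spotted that so many algorithms were disabled on our side.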
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-64949091658518613112020-09-05T14:19:00.001+00:002020-09-05T14:19:30.412+00:00Windows password change through different levels of RDPThere is conflicting and incomplete information on how to change your Windows password where RDP is involved. It turns out that the thing to do changes depending on how many levels of RDP are involved. Here's what I found:
<ul>
<li>No levels of RDP. This is the simple case. Just Ctrl-Alt-Del then click on change password.</li>
<li>One level of RDP. Just Ctrl-Alt-END (that's END, not Del) then click on change password.</li>
<li>Two levels of RDP. You need to send Ctrl-Alt-Del to the machine at the end of the RDP chain but typing that will do it on the top level machine. Ctrl-Alt-END will do it to the machine at the second level of RDP. So you have to use the OSK command on the target machine to get an On Screen Keyboard. Then type Ctrl-Alt and click on the Del button on the OSK display.</li>
</ul>
If you google to find out how to solve this problem the most common reply is Ctrl-Alt-END. Here people are assuming there is only one level of RDP. It is very annoying that what you have to do depends on how many levels of RDP there are.
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-7827611517968742402020-08-31T12:01:00.001+00:002020-08-31T12:01:49.919+00:00Case-insensitive ext4: just say no!I am dismayed to learn that ext4 was changed in linux kernel 5.2 to be case insensitive (strictly speaking, to allow it as an option). This is truly terrible and will come back to bite us all. See <a href="https://lore.kernel.org/linux-ext4/CAHk-=wg2JvjXfdZ8K5Tv3vm6+bKRedotF5cr5AwVZVBypVfdAQ@mail.gmail.com/">this kernel.org posting</a> for details. But here are just a few thoughts:
Is it really going to be case insensitive? I doubt it. There are some environments in which there is a requirement for filenames to contain both uppercase and lowercase characters. Java springs to mind where the filename maps directly to the class name. Of course one could start to code entirely in lowercase but what about those classes that have already been written? What happens when the source is moved to an ext4 partition that has this feature? I strongly suspect that when they say case-insensitive what they actually mean is case-preserving, like MS-Windows. The fact that this has not been called out shows that the functionality has not been considered very deeply.
People have confused two unrelated issues: the issue of filenames supporting case and the issue of applications making the case of filenames irrelevant or not. If the filesystem is case-preserving then applications will still need to cope with this by being case-blind where they think this is what the user wants. Putting this into the file system itself is completely wrong, IMAO.
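The Java concern above is concrete: under a case-insensitive but case-preserving lookup, two files that legitimately differ only in case collide in the same directory. A small simulation of such a lookup makes the point:

```python
def casefold_lookup(existing_names, new_name):
    """Simulate a case-insensitive, case-preserving directory:
    names keep their original spelling, but lookups (and therefore
    collisions) ignore case. Returns the clashing existing name,
    or None if there is no clash."""
    folded = {name.casefold(): name for name in existing_names}
    return folded.get(new_name.casefold())

# Two perfectly legal files on a case-sensitive filesystem...
names = ["Parser.java", "Tokenizer.java"]
# ...and what happens when the directory goes case-insensitive.
print(casefold_lookup(names, "parser.java"))   # clashes with "Parser.java"
print(casefold_lookup(names, "Lexer.java"))    # no clash
```

Moving a source tree containing both `Parser.java` and `parser.java` onto such a partition has no good outcome: one of the two names must lose.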
There are several changes being made to linux which I don't like and this is another in a growing list:
<ul>
<li>systemd. I notice now that more and more linux software that is available through a distro's package management system is dependent (transitively) on systemd. I anticipate a day where practically every package has this dependency.</li>
<li>The Out Of Memory (OOM) Killer. I've already blogged about this.</li>
<li>btrfs not supporting datetime last accessed.</li>
</ul>Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-53141534360551284932019-10-26T19:19:00.001+00:002019-10-26T19:19:13.505+00:00Building open source C++ libraries on Windows for 32 bit and 64 bitIn my experience, most open source C/C++ library projects don't do a good job of providing the ability to build the library in all four builds, i.e. all combinations of release mode and debug mode with 32 bits and 64 bits. These days it is usually just 64 bit and sometimes it's just 64 bit release. To get all four build modes one has to start hacking but there is a little gotcha that nobbles me every now and then, so I thought I would blog about the solution so I never have to strain my brain to remember it in future. I can just go to my blog.<br />
<br />
Add a new configuration from the configuration manager. Pick Win32 from the pick list and say you want to inherit from the 64 bit configuration (that's so you get all that the 64 bit configuration has). This configuration will claim to be 32 bit but there will be a problem. The linker will be set for 32 bit but the compilation will be in 64 bit. This is not apparent from the settings dialog. So when you build you will see an error like:<br />
<br />
<pre>fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'X86'
</pre><br />
To fix this, edit the Visual Studio project file, removing this line from the 32 bit sections:<br />
<br />
<pre>&lt;AdditionalOptions&gt;%(AdditionalOptions) /machine:x64&lt;/AdditionalOptions&gt;
</pre><br />
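Hand-editing the project file is easy to get wrong, so the removal can be scripted. Here is a rough sketch in python; note it treats the project file as text rather than parsing the XML, it drops the option wherever it appears rather than only in the Win32 sections, and the sample snippet is made up for illustration.

```python
def strip_x64_machine_flag(project_text):
    """Remove AdditionalOptions lines carrying the inherited
    /machine:x64 linker flag from a Visual Studio project file.
    Matching the whole line keeps the surrounding XML intact."""
    kept = []
    for line in project_text.splitlines():
        if "<AdditionalOptions>" in line and "/machine:x64" in line:
            continue  # drop the 64 bit machine flag inherited by Win32
        kept.append(line)
    return "\n".join(kept)

# A made-up fragment of a .vcxproj for demonstration.
sample = """<Link>
  <AdditionalOptions>%(AdditionalOptions) /machine:x64</AdditionalOptions>
  <SubSystem>Console</SubSystem>
</Link>"""
print(strip_x64_machine_flag(sample))
```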
That's it!<br />
<br />
<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-67218216998516541592019-03-16T15:24:00.001+00:002021-06-03T05:38:15.763+00:00I have converted at long last to 1TBSAfter decades of firm adherence to the Allman brace style I have finally changed my mind. I am now in the 1TBS camp.<br />
Here is my reasoning, it is all a matter of my own personal opinion of course. The stuff below is not trying to make a
logical reasoned argument for 1TBS in general, just why I changed my mind.<br />
<br />
IMO Allman is useful to show where code blocks begin and end in legacy code where functions ramble on and on as they grow uncontrolled and undisciplined over the years. Such source often contains a random mixture of tabs and spaces.<br />
It is quite hard to see where the scope blocks are under these conditions. Reformatting Allman style makes such code clearer than it would otherwise be. Using an Allman style on new code allows it to grow in an uncontrolled way where blocks just get bigger and bigger, rather than being refactored. When this happens the use of Allman means the blocks that would otherwise start to become much harder to see stay reasonably visible.<br />
<br />
So that's why I used to think Allman was good. It allowed blocks to be more easily seen in ancient crufty code and it allowed new code to become crufty while still preserving some ability to see the blocks. But I have now decided that this is not a good reason to prefer the Allman style.<br />
<br />
We all know that advocates of 1TBS say it makes the code shorter, and it does. And shorter code has become much more fashionable over recent decades. The shorter the code the smaller the blocks and once a block becomes only a few lines the Allman style makes such blocks unnecessarily longer. I see this in Java code where 1TBS is the dominant layout style and shorter functions are practised much more than in C++. Maybe it's because a lot of C++ code is ancient and in the dim and distant past it was more normal to write long, rambling unfactored functions. Java hasn't been around long enough for such cruft to accumulate to the same degree. Plus it came along later, during which time shorter functions became more fashionable. When was the last time you saw a Java function that rambled on for hundreds or even thousands of lines? I wouldn't be surprised if you've never seen one. But we've all seen it in C and C++.<br />
<br />
So, if I ever get the luxury of working on a new C++ project and I get any say in things like layout style, I would advocate 1TBS. The project would have short functions and the minute a function looks like growing to the point where the blocks start to become less visible it would have to be refactored. This is not just a matter of making it look pretty. The argument for refactoring would be that the result would be easier to test and it would be easier to reason about code coverage, as well as being easier to understand. This is already standard practice in Java, thank goodness.<br />
It ought to be standard practice in any programming language.<br />
<br />
During my conversion to 1TBS I have been working with python. This is a language where the issue has been designed away. How jolly sensible. Why don't all new languages learn from this? Python seems to be the only one. Every time some new language comes out it is inevitably based on C++ regarding layout. Java copied this and so has almost every other language since.<br />
<br />
There is another factor which led to my 1TBS conversion: I have been working on a project where Allman is the standard, but with a twist. Single statements must not be surrounded with braces. We all know how potentially dangerous that is. It can lead to the dangling else problem, and has done on that very project, a fact revealed by a clang-tidy analysis.<br />
It can also cause a problem when a developer changes the code to make the block more than one statement. These issues just don't arise when one uses 1TBS.<br />
<br />
So, how is this change in belief going to affect my programming life? Hardly at all, unless I write stuff on my own (e.g. updating my sourceforge projects or creating new ones). After all, I am in an environment in which<br />
1TBS is forbidden and its use would probably harm the code base as it would make the blocks in long rambling functions even harder to understand than they already are. I can't even use 1TBS in the java code, since Allman is mandated there as well.<br />
<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-52419507846575256452019-01-06T15:09:00.000+00:002019-01-06T15:10:04.612+00:00A great C++ blog I've foundI've found <a href="https://www.bfilipek.com/">a great C++ blog </a>and I thought I just had to mention it here. There are lots of goodies about C++17 and C++20 and it keeps track of well known players in the industry and what they are up to, e.g. people like Bjarne Stroustrup, Herb Sutter, Nicolai Josuttis and John Lakos. The blog mentions high profile features and plans for C++ including things like the inclusion of Howard Hinnant's date library into the standard and the adoption of contracts. I encourage everyone to take a look.Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-6507144335326220762018-12-29T10:53:00.001+00:002022-03-27T14:45:04.630+00:00The insane OOM (out of memory) KillerIn the late nineties I worked on AIX for the first time. Back in those days there were several flavours of Unix available, all with their differences and idiosyncrasies. Linux was a fledgling and fitted on just one CD. I came across a feature of AIX which I thought was crazy - the OOM (out of memory) killer. In this variant of Unix malloc always succeeded, even when there wasn't enough memory. The idea was that malloc returned a pointer to heap memory but wouldn't actually start to use it until the first reference was made. At the point at which it did then memory had jolly well better be available. If it was then all well and good. If not then the OOM killer came into play. The OOM killer would choose a victim process and kill it. The result was that memory would be freed and the access occurring at the time would succeed. Sounds insane, right? Right. I laughed and thought that this one feature rendered AIX useless compared to the other Unixes and would lead to its demise. 
How wrong I was. Fast forward a few years later. It was added to Solaris. Sigh. Fast forward to today. It has been added to Linux.<br />
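The mechanism — overcommit at allocation time, victim selection at first touch — can be sketched in a few lines. This is only a toy model; the victim policy here (kill whoever holds the most memory) is just one of several policies real kernels have used.

```python
class OvercommitKernel:
    """Toy model of overcommitted memory: malloc always succeeds,
    and the reckoning happens when pages are first touched."""

    def __init__(self, physical_pages):
        self.free = physical_pages
        self.resident = {}           # pid -> pages actually touched

    def malloc(self, pid, pages):
        return True                  # always "succeeds": no check at all

    def touch(self, pid, pages):
        while self.free < pages:     # not enough real memory...
            victim = max(self.resident, key=self.resident.get)
            self.free += self.resident.pop(victim)  # ...shoot the biggest
            print(f"OOM killer: killed pid {victim}")
        self.free -= pages
        self.resident[pid] = self.resident.get(pid, 0) + pages

kernel = OvercommitKernel(physical_pages=100)
kernel.malloc(1, 80); kernel.touch(1, 80)   # fine: 20 pages still free
kernel.malloc(2, 50)                         # also "fine" - overcommitted
kernel.touch(2, 50)                          # pid 1 dies through no fault of its own
```

Note that the well-behaved process that allocated first is the one that gets killed; its only crime was being the biggest when someone else touched their pages.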
<br />
The OOM killer is a kernel development that mirrors what happens when banks try to innovate. It's what I call "the conspiracy of crappiness". It goes like this: some group or other tries to innovate but comes up with a really bad idea that doesn't work well and everyone hates it. The competition discover the move and for some inexplicable reason they copy it. Now everyone hates the competition as well and none of the players can be distinguished in this area. Bank charges on current accounts is an example. So is charging for withdrawals at ATMs (although customers have objected so vehemently to that one that there has been some back-pedalling). Well, in the world of Unix we now have the OOM killer.<br />
<br />
There's a good article at <a href="https://lwn.net/Articles/360439">LWN</a> that explains why this is insane. There's <a href="https://lwn.net/Articles/317814/">another article</a> that gives tips on how to mitigate the nastiness, but surely that is yet another testimony to the fact that it is nasty. I also came across <a href="https://lwn.net/Articles/104179/">this article</a> that discusses the nastiness and has an excerpt of <a href="https://lwn.net/Articles/104185/">an amusing article</a> that discusses the fairness, or otherwise, of how the victim is chosen. Here is the excerpt:<br />
<br />
<p><i><br />
An aircraft company discovered that it was cheaper to fly its planes with less fuel on board. The planes would be lighter and use less fuel and money was saved. On rare occasions however the amount of fuel was insufficient, and the plane would crash. This problem was solved by the engineers of the company by the development of a special OOF (out-of-fuel) mechanism. In emergency cases a passenger was selected and thrown out of the plane. (When necessary, the procedure was repeated.) A large body of theory was developed and many publications were devoted to the problem of properly selecting the victim to be ejected. Should the victim be chosen at random? Or should one choose the heaviest person? Or the oldest? Should passengers pay in order not to be ejected, so that the victim would be the poorest on board? And if for example the heaviest person was chosen, should there be a special exception in case that was the pilot? Should first class passengers be exempted? Now that the OOF mechanism existed, it would be activated every now and then, and eject passengers even when there was no fuel shortage. The engineers are still studying precisely how this malfunction is caused. <br />
</i></p>
<p>Update: 27 March 2022</p>
Since that aircraft analogy I have found an article on the perils of overcommit which gives a more dispassionate assessment, but still concludes it is a terrible idea: <a href="https://www.etalabs.net/overcommit.html">https://www.etalabs.net/overcommit.html</a>Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-43208473592354490772018-12-02T16:43:00.000+00:002018-12-02T16:43:08.232+00:00I can't stand the JBoss Application ServerI wonder which application server people choose when working on Java projects that need to publish dynamic web pages. I have used tomcat in the past and found it to be pretty good. But for the last few years I have been in an environment where JBoss was chosen. JBoss comes with all sorts of enterprisey EE things such as a JMS implementation and whilst initially this may seem attractive I have decided that I don't like it. I would now recommend that any project that needs JMS and dynamic web pages avoids an enterprise application offering. Instead I think it is better to choose the web page and JMS solutions separately.<br />
<br />
Years ago I wrote a book review for ACCU on a JBoss tutorial book. I gave the book a bad review because it was largely XML fragments concerning JBoss configuration. But I now see that this is what struggling in a JBoss environment is all about. I still think the book was wrong to have such large XML sections though. The precise XML needed to make JBoss do what you want seems to wibble depending on the exact version of JBoss you have and also possibly on what colour socks you are wearing. But it gets worse. Recently (as of the time of writing, December 2018) JBoss went proprietary. Red Hat now calls it JBoss Enterprise Application Platform, or JBoss-EAP for short. Not to be confused with the old open source version which was just called JBoss. In an attempt to deal with the confusion Red Hat renamed the old one to Wildfly and open source development is now done under that name. Wildfly does seem to be much better than JBoss but it's all relative; it is still derived from JBoss and so still suffers from the tremendous environmental difficulties caused by obscure and constantly changing XML configuration.<br />
<br />
<br />
So, for people who want JMS and web pages with dynamic content, I recommend ActiveMQ and Apache Tomcat respectively.Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-55309485097013850392017-11-25T11:11:00.001+00:002017-11-25T11:12:37.818+00:00Veracrypt instead of TruecryptBack in June 2014 Truecrypt died, but I and many others were able to build it from the source. I blogged about this before. Recently I had to access a couple of truecrypted volumes but found that my copy of truecrypt no longer worked. It relied on an old version of GTK that was no longer on my system. After some fruitless attempts to restore the required version of GTK2 I decided to try out VeraCrypt, which is the successor to Truecrypt.<br />
<br />
Veracrypt is everything that Truecrypt was, and more. Fully open source, multi-platform strong encryption with optional plausible deniability and compatibility with Truecrypt. After installing yasm and libfuse I was able to build Veracrypt from source with no trouble at all. And it works. It was able to read my old truecrypt volumes. It also works on my Windows laptop. Wonderful! I have now switched over to Veracrypt.<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-33171060007737140252017-10-19T20:47:00.002+00:002017-10-19T20:47:50.722+00:00How to find which dependent DLL can't be foundA couple of times in the last few years I have faced a knotty problem to do with DLLs on Windows. The first occasion was when a large complex program was calling the Win32 function ::LoadLibrary and it failed to find a sub-dependent DLL. A more recent case was where a Java program called loadLibrary to load a shared C++ library used by a JNI interface. This library load also failed. Both failures were silent and mysterious. No details were given, just that the load failed.<br />
<br />
I googled for help and asked friends and colleagues. The answer that came back again and again was to use the Dependency Walker at http://www.dependencywalker.com. Well, it turns out that every time I used that program to solve the riddle it was no help at all. I have now found a more reliable way, thanks to a tip via the ACCU general mailing list. I wrote a little C++ program. Before you run the program, set PATH to the value it would have in your particular problem situation. The program waits for the user to hit return, then it calls ::LoadLibrary on the library name supplied. What you have to do is run the program (with PATH set appropriately) and while it is waiting for you to hit return, run Procmon from Sysinternals. Enter the pid for the LoadLibrary program and set a filter for Operation to QueryOpen. Then hit return so it tries to load the library. The Procmon window will then fill with all the file access attempts made to resolve the DLLs. Bear in mind that it is using PATH to locate the DLLs so there will be several access failures. The thing to do is check each leaf DLL name and find the case or cases where it failed to find the DLL no matter which PATH directories were searched. That's it, you have found which DLL failure(s) occurred!<br />
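The little program is essentially the following. This is a sketch reconstructed from the description above, so details such as the exact messages are my own, and it is Windows-only:

```cpp
#include <cstdio>
#include <windows.h>

// Build this, set PATH to whatever it is in the failing scenario, then run it
// with the DLL name as the only argument. While it waits for return, attach
// Procmon, filter on the printed pid with Operation = QueryOpen, then press
// return here and watch the file access attempts appear.
int main(int argc, char* argv[]) {
    if (argc != 2) {
        std::fprintf(stderr, "usage: %s <dll-name>\n", argv[0]);
        return 1;
    }
    std::printf("pid = %lu; start Procmon now, then press return...\n",
                static_cast<unsigned long>(::GetCurrentProcessId()));
    std::getchar();
    HMODULE h = ::LoadLibraryA(argv[1]);
    if (h == nullptr) {
        std::fprintf(stderr, "LoadLibrary failed, error %lu\n", ::GetLastError());
        return 1;
    }
    std::printf("loaded OK\n");
    ::FreeLibrary(h);
    return 0;
}
```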
<br />
It is a shame there is no more convenient way to deal with this situation. If only the logic of calling ::LoadLibrary could be combined with the logic in Procmon that gets all the QueryOpen cases with the pathname and whether or not the access worked, all in one program. Maybe one day someone will write such a program, but in the meantime this solution will have to do. Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-17088942886255134142017-04-15T15:54:00.000+00:002017-04-15T15:54:53.288+00:00Linux Mint 17 and scroll bar arrowsA while ago I did a complete reinstall of my desktop machine using Linux Mint 17. One of the first things I noticed after doing this was that the arrow buttons that normally appear in conjunction with the scroll bar had disappeared. I deemed this a minor irritation and didn't do anything about it. But more recently I investigated why this was and what to do about it and found various blogs etc where people were complaining of the same thing and offering various solutions. I only found one solution that actually worked and I give details on it below:<br />
<br />
* Ensure that your changes are made to the Mint-X theme. You need access to the theme selector. Click on the Linux button (bottom left hand corner) and click on Settings. In the right hand menu pane click on Appearance (with the jacket and tie icon). This shows the theme selector when you pick the first tab, Style (which is the default tab). When you click on a theme it is immediately selected. There is no need to logout, reboot, or anything else. On selecting a theme the theme config files are read and processed. Therefore when you edit the theme files, use the selector to pick any theme *other* than Mint-X, then click on Mint-X again to pick up your changes.<br />
<br />
* As root, edit the theme files. These are found under /usr/share/themes, so for Mint-X the directory is /usr/share/themes/Mint-X. There are sub-directories for gtk-2.0 and gtk-3.0. My edits were done to gtk-2.0. The file there is called gtkrc. Make a backup copy of the file first. Ensure your file contains the following:<br />
<code><br />
GtkScrollbar::has-backward-stepper = 1<br />
GtkScrollbar::has-forward-stepper = 1<br />
GtkScrollbar::stepper-size = 18<br />
GtkScrollbar::min-slider-length = 30<br />
GtkScrollbar::slider-width = 18<br />
GtkScrollbar::trough-border = 1<br />
GtkScrollbar::activate-slider = 1<br />
</code><br />
The crucial line turns out to be:<br />
<code><br />
GtkScrollbar::stepper-size = 18<br />
</code><br />
Without that line, no scroll bar arrows.<br />
I found this tip on Linux Questions at <a href="http://www.linuxquestions.org/questions/linux-mint-84/question-how-to-enable-scrollbar-arrow-buttons-in-linux-mint-v17-3-a-4175580868/">http://www.linuxquestions.org/questions/linux-mint-84/question-how-to-enable-scrollbar-arrow-buttons-in-linux-mint-v17-3-a-4175580868/</a>.<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-10083948713093724542017-02-26T14:53:00.000+00:002017-02-26T14:53:39.573+00:00Using Joda time to handle date+time+timezoneI have been working on a client-server system where the client and server are in different timezones. The client is on London time, the server is in Los Angeles, a difference of 8 hours. This means that at the end of the business day in LA it is already the next day in London. The API that I have to use contains functions with parameters of type Calendar. As we know, Calendar is a date+time+timezone triple, and the timezone defaults to the local zone. This means that if the timezone is not explicitly specified then the value will change its meaning as it goes over the wire.<br />
<br />
I have been using the Joda datetime package to help me and after a bit of struggling eventually came up with the example program below which shows the construction of Joda DateTime objects for a specific date+time+timezone which is displayed correctly in both timezones. The program also shows how to construct Calendar objects from them for the correct timezone.<br />
<br />
<pre style="font-family: Andale Mono, Lucida Console, Monaco, fixed, monospace;
color: #000000; background-color: #eee;
font-size: 12px; border: 1px dashed #999999;
line-height: 14px; padding: 5px;
overflow: auto; width: 100%"> <code style="color:#000000;word-wrap:normal;">
package jodaexample;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;
/**
 *
 * @author marlowa
 * This example shows a time of 18:30 in Los Angeles which, on this date
 * (daylight saving time), is 7 hours behind UTC. This means the
 * date+time+timezone is 01:30 the next day in UTC, or 02:30 in BST.
 */
public class example {
    public static void main(String[] args) {
        System.out.println("Joda timezone example program.");
        DateTimeZone mytimezone = DateTimeZone.forID("America/Los_Angeles");
        DateTime mydatetime = new DateTime(2017, 3, 31, 18, 30, 0, mytimezone);
        String formatString = "yyyy-MM-dd HH:mm:ss z '('Z')'";
        DateTimeFormatter dtf = DateTimeFormat.forPattern(formatString);
        System.out.println("DateTime (local, i.e. behind UTC) = " + dtf.print(mydatetime));

        Calendar datetimeInAmerica = mydatetime.toGregorianCalendar();
        SimpleDateFormat sdfInAmerica = new SimpleDateFormat(formatString);
        sdfInAmerica.setCalendar(datetimeInAmerica); // to set the timezone.
        System.out.println("Calendar inAmerica = " + sdfInAmerica.format(datetimeInAmerica.getTime()));

        long dateTimeMilliseconds = mydatetime.getMillis();
        int millisecondsOffset = mytimezone.getOffset(dateTimeMilliseconds);
        System.out.println(String.format("Milliseconds = %d, offset = %d", dateTimeMilliseconds, millisecondsOffset));
        long millisecondsInTimezone = dateTimeMilliseconds + millisecondsOffset;
        System.out.println("millisecondsInTimezone = " + millisecondsInTimezone);
        long millisecondsInUTC = mytimezone.convertLocalToUTC(millisecondsInTimezone, false);
        DateTime dateTimeUTC = new DateTime(millisecondsInUTC, DateTimeZone.UTC);
        System.out.println("DateTime (UTC) = " + dtf.print(dateTimeUTC));

        Calendar datetimeInLondon = dateTimeUTC.toGregorianCalendar();
        SimpleDateFormat sdfInLondon = new SimpleDateFormat(formatString);
        sdfInLondon.setCalendar(datetimeInLondon); // to set the timezone.
        System.out.println("Calendar inLondon = " + sdfInLondon.format(datetimeInLondon.getTime()));
    }
}
</code>
</pre><br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-63577900772209615832016-05-25T13:20:00.002+00:002016-05-25T13:20:34.178+00:00The cause of MIDL error MIDL2398Whilst working on a Windows Visual Studio (VS) project that uses MIDL (Microsoft IDL) I suddenly started getting the error MIDL2398 during the build. This was for no apparent reason. I tried the usual things, logging off, rebooting, cleaning the VS project, blowing my SVN checkout away and doing a fresh build. Sometimes this worked and sometimes it didn't. This was driving me nuts. Googling didn't yield much. Other people were also seeing this problem but no-one explained it and several people said that when they re-installed VS it went away. Yeah, right.<br />
<br />
After a few hours of these failures I noticed something. The problem seemed to happen roughly on the hour. Then I remembered. I had recently set up a jenkins job to run on the hour. The jenkins job was building something else in an unrelated area but the coincidence seemed too great to ignore. I disabled the job. It turns out that the jenkins job was failing and during the failure logging it tried to log to a logfile without using double quotes around the filename. Since it was a jenkins job with the default jenkins install directory, the pathname started with "C:\Program Files (x86)\Jenkins". This was causing the logfile "C:\Program" to be created. The jenkins job, being run by jenkins, had administrator privileges so it was allowed to write this file to the root directory. When my VS got to the COM bit where it runs MIDL I got the error.<br />
<br />
The fact is that the presence of the rogue file "C:\Program" kills MIDL with this weird error. Well I never. And it turns out to be easier to accidentally create this rogue file than you might think.Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com3tag:blogger.com,1999:blog-15640890.post-8781281114480930722015-09-26T12:16:00.001+00:002015-09-26T12:17:17.616+00:00The pesky capslock and insert keys on Windows computersIt is a complete mystery to me why computer keyboards even have a capslock key. Old timers like me have a theory that in the old fashioned days of typewriters there was a practical reason, but surely there is no reason for it now. See <a href="http://capsoff.org/history">capsoff</a> for a history lesson. Nowadays it is just a key that you hit by accident. At home I use the <a href="https://en.wikipedia.org/wiki/Happy_Hacking_Keyboard">Happy Hacking</a> keyboard. That's right, I spent extra money to get a keyboard that doesn't have the capslock key!<br />
<br />
For several years I have been using registry hacks to disable the capslock key. It is one of the first things I do when setting myself up on a new machine. But until recently I didn't know what to do about another pesky key: the insert key. That's the key that toggles typing between insert and overwrite. The current setting is not displayed so you only find out if you have hit it by accident when you notice that the last few characters you typed overwrote instead of inserting. Unlike the numlock key, the insert key has no feedback to tell you its current state. So this is another key you might want to disable. I couldn't find a registry hack for that but recently I found a great program for Windows called <a href="https://sharpkeys.codeplex.com/">SharpKeys</a>. This does allow you to turn off the insert key and, of course, the capslock key. So I now resolve to use SharpKeys wherever I go from now on. I hope you find it useful too.<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-50926878732709216602015-08-01T13:05:00.003+00:002017-04-15T15:58:08.224+00:00The death of PurifyPurify is a memory debugger program used by software developers to detect memory access errors in programs, especially those written in C or C++. It was originally written by Pure Software.<br />
<br />
My first experience of purify was way back in the days of Motif programming, around 1992. I used it to track down memory corruption and leakage bugs in my code for a complex oil and gas graphics program. After I had fixed my bugs I found that purify complained about loads of bugs in Motif. Over the next few years Motif got cleaned up dramatically, thanks in no small part to purify. I have been a keen user ever since and as time went on it was ported from Solaris to other flavours of UNIX and to Windows. A GUI was added, better support for multi-threading, it just got better and better.<br />
<br />
Why was purify so good? Because at the time there was little else you could use that would do the same job in a completely comprehensive way. The other tools typically required access to the entire source of your product since recompilation was necessary. Other approaches included interposing special versions of new/delete and malloc/free, which required special linking and sometimes special compilation as well. I saw one attempt at using a virtual machine, from IBM, but IMO it was a failure. I broke it with a simple 3-line program almost immediately. So I was very skeptical in the early days that emulation would ever work. Boy, was I wrong when it comes to valgrind. But valgrind wasn't around then. Remember, we are talking about how to debug legacy C++ that was written before valgrind was invented or linux was popular.<br />
<br />
Pure Software was acquired by Atria but the product continued to be good at that point and spread mainly by word of mouth. There were fully functional but time-limited trial versions. I used to say it was the next tool you should get right after the C++ compiler. But then it was acquired by Rational where it stayed for many years. It languished under the ownership of Rational, who didn't seem particularly keen to sell it. One had to jump through hoops when one had finally won the argument to purchase licenses. These were not cheap but purify was so vastly superior to the other tools that the case could be made. Then the Rational purchase obstacles kicked in. One had to be determined. Then IBM acquired the product. If it was hard to buy from Rational it was almost impossible with IBM. And they neutered the demo/trial version, effectively making it so that it only spread by word of mouth. One could no longer use the trial version to evaluate it.<br />
<br />
Fast forward to January 2015. IBM sold Purify to UNICOM. This sale has been disastrous for all users of purify. UNICOM no longer sell it. Instead they sell a product called PurifyPlus, which is a bundle of other tools developed by Pure Software and extended by subsequent owners. These tools are Quantify and PureCoverage, for performance and code coverage analysis respectively. These are and have always been good powerful tools. For some users it made sense to bundle them because if all three were desired the overall license fee was cheaper. Now there is no choice and buying all three is most definitely not for everyone. But there's more. You used to be able to purchase as many licenses as you wanted, from a single license to site-wide. Now UNICOM have made it so that the minimum number of licenses is FOUR. This makes it very expensive. Also a year's support fee is compulsory. I recently got a sales quote for a client of mine and the quote was for over TEN THOUSAND dollars. Needless to say at that sort of price it was game over.<br />
<br />
After discussion with some of my colleagues I have come to the conclusion that UNICOM want to kill the entire product suite off. Why else would they only sell it to large enterprise outfits to whom tens of thousands of dollars for software purchases are as nothing? Effectively purify is dead. This is a serious problem for the development and maintenance of legacy C++ programs. <br />
<br />
It's not a problem for any new C++ software development. Just start developing it on LINUX where valgrind is available. But valgrind will never be available for Windows. The problem is trying to purify a large Windows C++ program that cannot be ported to Linux (and where there may not be any need or desire to do so).<br />
<br />
So I no longer recommend purify. It is consigned to the dustbin of history. What a pity, it was a fantastic tool right to the end.<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com6tag:blogger.com,1999:blog-15640890.post-31712428696060867402015-07-02T16:25:00.001+00:002015-07-02T16:25:30.116+00:00The wonders of semantic versioningMany years ago, in the dim and distant past, I used to work for Prime Computer Inc. They don't exist any more. They had a very good policy when it came to versions of their operating system. They used the familiar major.minor.fix convention for denoting the version but were very strict about what this meant. The version numbers were always numbers, never strings, and you could and were supposed to infer things from the numbers. These inferences told you what versions were compatible with what other versions. They also told you about the scale and kind of changes between versions. Sadly the industry as a whole doesn't do any of this in general. In fact, until recently, Prime was the only case I knew of that ever did this properly. Then I came across something called Semantic Versioning. See the web site at http://semver.org. This describes exactly what was done at Prime. How jolly sensible. Let's hope this catches on.<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-28297881643887717292014-06-10T21:31:00.000+00:002014-06-10T21:31:08.630+00:00C++ code to dump memory in hex and ASCIIEvery now and then I have the need to dump a block of memory as hex and ASCII in some C++ code I am working on. Each time I google for some code I can snaffle to do the trick. Each time I find something of very mediocre quality which will just about do for the occasion. Well, I finally got sick and tired of this and now I am publishing <a href="https://docs.google.com/file/d/0B9u5XJqGQXg7dVBiMG5vS3dsVG8/edit">my own solution</a>. I hope that people find it useful.<br />
<br />
Many thanks to ECI training for <a href="http://www.youtube.com/watch?v=657oPhP0158">this YouTube video</a> on how to do file attachments in blogger.<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0tag:blogger.com,1999:blog-15640890.post-82392426524690453842014-06-01T11:51:00.000+00:002014-06-01T11:51:50.502+00:00TrueCrypt has gone! OMG!!!My hard drive failed so this weekend I started to migrate my stuff over to a new machine which didn't have much stuff set up on it. One of the missing components was truecrypt so I thought I would just download and build from source. What a shock awaited me - truecrypt has gone! See the wikipedia page that describes how it went on 29th May 2014. There was also an <a href="http://guardianlv.com/2014/05/truecrypt-users-seek-new-software/">article in the Guardian</a>. I am writing this on Sunday 1st June 2014. This was not how I wanted to spend my Sunday!<br />
<br />
Conspiracy theories to one side, the practical question remains, what does one do if one wishes to continue using truecrypt as it was? The answer seems to be to build from the source of the final 7.1a release. Download it from the <a href="https://www.grc.com/misc/truecrypt/truecrypt.htm">final release repository</a>. However, there is more to it than that. It doesn't build cleanly. I found someone else who was trying to do what I wanted to do, Reinhard Seiler. He <a href="http://reinhard-seiler.blogspot.co.uk/2012/07/compile-truecrypt-on-raspberry-pi.html">blogged about his build experience</a>. However, this was on a Raspberry Pi and I had some different problems. Here's what I found:<br />
<br />
<ul><li>The build requires nasm, yasm won't do. No problem, I installed it via synaptic package manager.</li>
<li>SecurityToken.cpp failed to compile due to missing PKCS11 header files. I followed Reinhard Seiler's instructions, placing the headers from ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-11/v211 into a sub-directory of my truecrypt source. This is so I could copy the entire directory if this ever happens to me again (i.e. a complete install on a new machine is needed).</li>
<li>I got compilation errors due to missing macros such as CKR_NEW_PIN_MODE. Luckily, I found a blogger who had hit the same problem and <a href="http://www.lucidelectricdreams.com/2008/12/truecrypt-61-install-guide-for-fedora.html">posted a solution</a>. Basically you ifdef out the offending lines. It is safe to do this since it is only error message handling.</li>
<li>Once it got past the PKCS11 errors I found that it needs fuse. I installed libfuse-dev from synaptic package manager.</li>
<li>The final compilation errors came from the GUI bits, which depend on wxWidgets. Synaptic to the rescue!</li><br />
<li>Finally it built. But then I got an error at runtime along the lines of "Failed to communicate with kernel device mapped drive". I had done a rather large synaptic upgrade without bothering to reboot. Apparently the pending kernel update was affecting truecrypt, so I was forced to reboot. Then it worked! Hurrah!</li><br />
</ul><br />
Once I had a working version of truecrypt I copied the entire build directory to my external USB backups directory, ready for the next time I need to install truecrypt on a new machine.<br />
<br />
Now I will just put on my tinfoil hat briefly. I reckon that it is a conspiracy that truecrypt has gone. The developers say that the tool is not necessary now that Microsoft have BitLocker but this just doesn't wash. For a start I am on linux! And second, BitLocker is closed, secret, proprietary, so there is bound to be an NSA backdoor. Now I will remove my tinfoil hat and go and get a nice cup of coffee!<br />
Andrew Marlowhttp://www.blogger.com/profile/15677162551542366263noreply@blogger.com0