Sunday, February 18, 2024

Jenkins, git and ssh in a corporate environment

I reinstalled a later version of jenkins in order to dodge a CVE and found that git clone would no longer work. The terminal that started jenkins was getting messages prompting for the git ssh passphrase. The jenkins job just sat there on the git clone command without making any progress. I puzzled over this for ages. The previous version of jenkins had been working fine. I restarted the ssh agent but it had no effect. I googled to find out how to change my ssh credentials such that I had no passphrase (ill-advised though that may sound) and found articles claiming it was impossible. Well, it turns out it is possible. I did it and the jenkins problems went away. I don't like having an empty passphrase, it seems like bad practise, but hey, ho, needs must. So here's how I reset the passphrase to be empty: the ssh-keygen -p command prompts for the current passphrase. Enter it, then when ity asks for the new one (and confirmation) just hit return. Job done.

Friday, June 23, 2023

How to display markdown files from the linux command line

It took quite while to track down how to do this. When you google for it you find GUI commands but not much for the command line. There are several tools but I have chosen one that works with what is available via the standard Red Hat repo for RHEL8. I use it even though my own machine is running mint 20.1. Going for something that is easy to install on RHEL8 means there is more of a chance that it will work in a corporate environment. The command is called mdo and it is written in python. It can be pip'd into your virtual python environment. It requires prior installation of another component called rich, which can also be pip'd in. This is the great attraction of utilities written in python. They can be pip'd into your virtual environment and thus do not require root access to make them available. These components are on github at https://github.com/eyalev/mdo and https://github.com/Textualize/rich .

Monday, August 29, 2022

Many forks on github projects

When a project is not updated very often or goes by for years with no official updates, forks can proliferate. Then people who arrive at the site may want to know which forks are active. Luckily, there is a github project for solving this problem! It is called ActiveForks. If you go to https://techgaun.github.io/active-forks/index.html you can enter the name of the github project and you will get a table of results, with the ability to sort on any of the presented columns.

Monday, August 01, 2022

Windows and directories that cannot be (easily) deleted

If a directory contains nodes whose full pathname is greater than around 255 characters then Windows has tremendous difficulty deleting such a directory. But luckily, there is an easy way out. The 7-Zip command comes with an additional executable, 7zFM.exe which is the 7-ZIP File Manager. I recommend you put an icon for this on your desktop. It works a bit like a file explorer with one significant difference. If you click on a directory and enter shift-delete then it will delete that directory even if other commands fail due to the 255 problem.

Sunday, March 27, 2022

Function parameters that are fundamental types passed by value and const

The rule is to not do this in the header file. Some people say don't do it in the cpp file either (I am in that camp) but this does seem to be a matter of opinion. See the abseil article https://abseil.io/tips/109 for a discussion.

[[Addition: April 2025]]

I went to ACCU 2025 where there were talks on contract assertions, a feature that has a bearing on this area. As of April 2025 contract assertions do not distinguish between function parameter values on entry and any modified value on exit. So there is no way make clear if a post condition variable refers to its value before or after. In other languages such as Eiffel, which have DbC built into the language, there are ways. Apparantly, this will be dealt with in C++, maybe by C++26. But in the meantime when one passes a fundamental type by value and does not use const in the cpp file, the value can be altered during function execution, since C++ is a pass by copy language. With such modifications it is possible for a post condition to inadvertently test a modified value. The standards committee are aware of this issue and suggest a remedy that I don't like at all. They say one should decorate such parameters with const. Since preconditions and postconditions are expressed on the function signature this means putting the const there. For symmetry I suppose they would also say that they must also be in the cpp file. I really don't like this, so my position of function parameters that are fundamental types passed by value stands. I will just have to wait for c++26.

Windows, X11, cygwin, fonts and Xming

For years I used the X11 server that is part of cygwin. It seemed to be a bit flakey but there didn't seem to be anything better. Every now and then I would run into a problem where it would seem to work but xterm would complain about missing fonts. So, I downloaded and installed xming-fonts (from https://sourceforge.net/projects/xming/files/Xming-fonts/7.7.0.10/Xming-fonts-7-7-0-10-setup.exe/download) on my local node (not the node that was running xterm) and that fixed the error. These days I no longer use the cygwin X11. I use XMing: see http://www.straightrunning.com/XmingNotes.

Monday, May 31, 2021

Software Development links and comments

Intro

I am in the process of decommissioning my website and moving my notes on software development and suchlike to my blog here.

ACCU

I am an active member of ACCU (the Association of C and C++ Users). It's been a long time since I had anything published by them. There are a couple of articles a few book reviews.

C++ Coding Guidelines

Many years ago I started to write a book on this. It was never published. I did discuss an early draft with Addison Wesley but they did not show any interest. I discussed this with some ACCU people and the theory put forward was that maybe they had been approached by other authors on the same subject. About a year later Sutter and Alexandrescu had their guidelines published. Their book is very good and I recommend it. Their book is much better than what I was working on.

In a corporate environment I would never bother with a coding guidelines document these days. They are never read, never enforced, and can become out of date very quickly. They are also a rich source of arguments and ill-feeling. There has to be a better way. There is. It is called clang. I would have a jenkins job to use clang-format to format the code. That would take care of all whitespace and brace arguments. And I would use clang-tidy static code analysis (SCA) to find the more serious coding issues. There would be a jenkins job to ensure that the code was always SCA-clean. clang-tidy is not the easiest program to run since it needs to know what compiler options are used and that includes macros and the places where to look for include files. I have found that it helps to write a python script to take care of these things. It is worth the effort.

Sourceforge

Here are my own projects, hosted on SourceForge. They are old and have fallen into disuse really. If I was going to maintain them I would probably start by relocating them to github.
  • LAUM - Development has stalled. I hoped it would eventually it will be a suite of applications to help in the administration of groups of machines. The whole thing has been made a bit obsolete by docker and kubernetes.
  • FRUCTOSE - wrote an LGPL'd C++ unit test framework. The main motivation was a simple, header-only framework that does not depend on boost. However, these days I recommend that people go with the Google unit test framework (gtest).
  • Cyclic Logs - wrote a GPL'd package to provide cyclic logfiles. I think this does still have a practical use in environments where the disk space is constrained.
  • DepDot - wrote a GPL'd command (perl script) to show cyclic dependencies among libraries.

TeX

I am a keen user of TeX, via the LaTeX variant created by Leslie Lamport. I have been a member of the UK branch of the Tex Users Group for several years. I tend to produce most of my documentation using LaTeX. This allows me to produce PDF and postscript files (via DVI conversion programs) and RTF files (via latex2rtf). The RTF format is an open format but due to its close integration with Microsoft Word for Windows it is useful for people that require documents to be in a Microsoft format. I used to use latex2html to create web pages from my LaTex documents, but have now found that HeVeA does a better job and is much faster. It is written in oCamL. For many years I experimented with alternatives to using LaTeX directly, flirting briefly with DocBook, and other approaches. I now conclude that there is just no substitute for writing in LaTeX directly.

CORBA

I feel great nostalgia when I think of CORBA. I liked it for a very long time. I was interested in CORBA right from the beginning (i.e. when the standard was so embryonic, CORBA would not even interoperate with itself!). Despite the complexity of the standard, I still think CORBA had a lot to offer. I have used several ORBs, some open source, some proprietary. My favourite used to be MICO but unfortunately the support for multithreading is still not finished and development petered out around 2017, so TAO (the ACE ORB) is now the winner. I have also looked at JacORB by Gerald Brose. The best proprietary ORB (IMO) was Orbix from IONA (now owned by Progress).

For those interested in CORBA I recommend heading over to the web site of Ciaran McHale (, a former IONA consultant whom I have worked with before. He has a free book there which I think provides a great practical introduction to programming with CORBA.

However, despite the nostalgia I have to admit that CORBA has had its day. The Rise and Fall are well documented by Michi Henning, see https://cacm.acm.org/magazines/2008/8/5336-the-rise-and-fall-of-corba/fulltext. Unfortunately there does not seem to be anything trying to replace it, except possibly ICE from ZeroC. It is Open Source, which is obviously a good thing, but be advised that the the license is GPL and so does not permit use in proprietary products (a separate license agreement is available with a purchase cost). If I was ever asked to work on a project where there was a need for some kind of service interface I would probably make it a web interface. That's the current fashion at the time of writing (2021) and there are umpteen frameworks. I would probably choose gRPC with Web Assembly. I would never use SOAP and I would be wary of REST.

Free Software and Open Source

Projects that I have contributed to include:
  • DoxyPress
  • PoCo
  • ACE
  • OpenSSL
  • I did some work on ESNACC, an extended version of SNACC, an old ASN.1 compiler. ESNACC started because SNACC was an old orphaned project with no support for either C++ or DER and PER (SNACC was old BER only). Sadly, work on ESNACC gradually fizzled out.

I have been an associate member of the Free Software Foundation for many years.

I admit that I am not consistent when it comes to the ideals of the Free Software Foundation. I agree with the FSF in the same way that I agree with vegans. I know that unless one is a vegan one is supporting the animal food industry, which is full of cruelty and suffering. But I just can't go vegetarian, let alone vegan. I won't go into the reasons here. I know that I am supporting animal cruelty and I am not happy about it, but it is not going to change any time soon. In a similar way, despite the good things I find in the FSF, I am, unfortunately, supporting the proprietary software industry. My job involves the development of proprietary software and this has been the case my entire working life. That is not going to change (i.e. I am not going to have a change of career). I find the best I can do is to promote open source in the workplace. I know this is a rather feeble thing. After all, we know that Free Software and Open Source are different movements with different goals. But in my opinion the software industry as a whole will never understand the importance of Free Software. They are beginning to understand Open Source and that's better than nothing.

ASN.1

I really like ASN.1. I was first introduced to it way back in 1984 when the encoding standard was called .X409. It was used on Prime Computers for some of its client/server software and proved to be a boon when the protocol had to change, due to the use of sets and version numbers. Sadly, I have not seen it used much since, except of course in a few standard internet protocols.

I found out there is effectively a replacement for ESNACC, asn1c, which seems to be significantly better than either SNACC or ESNACC. I haven't played with it yet. I wonder if I ever will.

There is a useful book on ASN.1 that you might find interesting.

Heroes of software

There are so many potential heroes for a computer geek to look up to, but my favourite is Alan Turing. He is regarded by many as the father of computer science. He is particularly admired by many of us in the UK for his work at Bletchley Park. Turing's work there was part of the outstanding effort in decrypting German messages during the Second World War.

Thursday, October 22, 2020

Java has finally got strong crypto

For a long time now America has treated strong crypto as akin to munitions; a deadly weapon that must not be allowed to fall into the wrong hands. For the background to this, see the wikipedia page at https://en.wikipedia.org/wiki/Export_of_cryptography_from_the_United_States

The wikipedia page indicates that this attitude was significantly lessened in 1992 but the sad fact is that is persisted well beyond that for java. The Oracle release notes for JDK8 at https://www.oracle.com/java/technologies/javase/8all-relnotes.html say that the restricton was removed in January 2018, in update 161. The change was also backported to JDK7 in update 171.

This means that java projects using JDK8 had better move to at least this update version if they have not already. Of course, users of OpenJDK probably never had a problem and certainly don't now.

The way I ran into this problem was during work on a trade feed that uses the FIX protocol. The FIX session was secured with TLS1.2. everything was fine until one day the remote side changed from a weak crypto algorithm to a strong one. Our side failed with a mysterious SSL handshake error. This came from the mina package, as used by quickfixj. Mina which doesn't seem to handle this situation well at all. We had to turn on packet level logging via the JVM option -Djavax.net.debug=all to see what was happening. The log showed that the remote side wanted to use a strong algorithm but that many algorithms on our side were disabled. At the time the latest JDK8 update from Oracle was update 251. I switched to that and then all those messages about unknown algorithms disappeared and the algorithm preferred by the remote side was accepted. Everything started working again.

Saturday, September 05, 2020

Windows password change through different levels of RDP

There is conflicting and incomplete information on how to change your Windows password where RDP is involved. It turns out that the thing to do changes depending on how many levels of RDP are involved. Here's what I found:
  • No levels of RDP. This is the simple case. Just Ctrl-Alt-Del then click on change password.
  • One level of RDP. Just Ctrl-Alt-END (that's END, not Del) then click on change password.
  • Two levels of RDP. You need to send Ctrl-Alt-Del to the machine at the end of the RDP chain but typing that will do it on the top level machine. Ctrl-Alt-END will do it to the machine at the second level of RDP. So you have to use the OSK command on the target machine to get an On Screen Keyboard. Then type Ctrl-Alt and click on the Del button on the OSK display.
If you google to find out how to solve this problem the most common reply is Ctrl-Alt-END. Here people are assuming there is only one level of RDP. It is very annoying that what you have to do depends on how many levels of RDP there are.

Monday, August 31, 2020

Case-insensitive ext4: just say no!

I am dismayed to learn that ext4 was changed in linux kernel 5.2 to be case insensitive (strictly speaking, to allow it as an option). This is truly terrible and will come back to bite us all. See this kernel.org posting for details. But here are just a few thoughts: Is it really going to be case insensitive? I doubt it. There are some environments in which there is a requirement for filenames to contain both uppercase and lowercase characters. Java springs to mind where the filename maps directly to the class name. Of course one could start to code entirely in lowercase but what about those classes that have already been written? What happens when the source is moved to an ext4 partition that has this feature? I strongly suspect that when they say case-insensitive what they actually mean is case-preserving, like MS-Windows. The fact that this has not been called out shows that the functionality has not been considered very deeply. People have two confused two unrelated issues: the issue of filenames supporting case and the issue of applications making the case of filenames irrelevant or not. If the filesystem is case-preserving then applications will still need to cope with this by being case-blind where they think this is what the user wants. Putting this into the file system itself is completely wrong. IMAO. There are several changes being made to linux which I don't like and this is another in a growing list:
  • systemd. I notice now that more and more linux software that is available through a distro's package management system is dependent (transitively) on systemd. I anticipate a day where practically every package has this dependency.
  • The Out Of Memory (OOM) Killer. I've already blogged about this.
  • btfs not supporting datetime last accessed.

Saturday, October 26, 2019

Building open source C++ libraries on Windows for 32 bit and 64 bit

In my experience, most open source C/C++ library projects don't do a good job of providing the ability to build the library in all four builds, i.e. all combinations of release mode and debug mode with 32 bits and 64 bits. These days it is usually just 64 bit and sometimes it's just 64 bit release. To get all four build modes one has to start hacking but there is a little gotcha that nobbles me every now and then, so I thought I would blog about the solution so I never have to strain my brain to remember it in future. I can just go to my blog.

Add a new configuration from the configuration manager. Pick Win32 from the pick list and say you want to inherit from the 64 bit configuration (that's so you get all that the 64 bit configuration has). This configuration will claim to be 32 bit but there will be a problem. The linker will be set for 32 bit but the compilation will be in 64 bit. This is not apparent from the settings dialog. So when you build you will see an error like:

fatal error LNK1112: module machine type 'x64' conflicts with target machine type 'X86'

To fix this, edit the Visual Studio project file, removing this line from the 32 bit sections:

<AdditionalOptions>%<AdditionalOptions> /machine:x64</AdditionalOptions>

That's it!


Saturday, March 16, 2019

I have converted at long last to 1TBS

After decades of firm adherence to the Allman brace style I have finally changed my mind. I am now in the 1TBS camp.
Here is my reasoning, it is all a matter of my own personal opinion of course. The stuff below is not trying to make a logical reasoned argument for 1TBS in general, just why I changed my mind.

IMO Allman is useful to show where code blocks begin and end in legacy code where functions ramble on and on as they grow uncontrolled and undisciplined over the years. Such source often contains a random mixture of tabs and spaces.
It is quite hard to see where the scope blocks are under these conditions. Reformatting Allman style makes such code clearer than it would otherwise be. Using an Allman style on new code allows it to grow in an uncontrolled way where blocks just get bigger and bigger, rather than being refactored. When this happens the use of Allman means the blocks that would otherwise start to become much harder to see stay reasonably visible.

So that's why I used to think Allman was good. It allowed blocks to be more easily seen in ancient crufty code and it allowed new code to become crufty while still preserving some ability to see the blocks. But I have now decided that this is not a good reason to prefer the Allman style.

We all know that advocates of 1TB say it makes the code shorter, and it does. And shorter code has become much more fashionable over recent decades. The shorter the code the smaller the blocks and once a block becomes only a few lines the Allman style makes such blocks unnecessarily longer. I see this in Java code where 1TBS is the dominant layout style and shorter functions are practised much more than in C++. Maybe it's because alot of C++ code is ancient and in the dim and distance past it was more normal to write long, rambling unfactored functions. Java hasn't been around long enough for such cruft to accumulate to the same degree. Plus it came along later during which time shorter functions became more fashionable. When was the last time you saw a Java function that rambled on for hundreds or even thousands of lines. I wouldn't be surprised if you've never seen one. But we've all seen it in C and C++.

So, if I ever get the luxury of working on a new C++ project and I get any say in things like layout style, I would advocate 1TBS. The project would have short functions and the minute a function looks like growing to the point where the blocks start to become less visible it would have to be refactored. This is not just a matter of making it look pretty. The argument for refactoring would be that the result would be easier to test and it would be easier to reason about code coverage, as well as being easier to understand. This is already standard practise in Java, thank goodness.
It ought to be standard practise in any programming language.

During my conversion to 1TBS I have been working with python. This is a language where the issue has been designed away. How jolly sensible. Why don't all new languages learn from this? Python seems to be the only one. Every time some new language comes out it is inevitably based in C++ regarding layout. Java copied this and so has almost every other language since.

There is another factor which led to my 1TBS conversion: I have been working on a project where Allman is the standard, but with a twist. Single statements must not be surrounded with braces. We all know how potentially dangerous that is. It can lead to the dangling else problem, and has done on that very project, a fact revealed by a clang-tidy analysis.
It can also cause a problem when a developer changes the code to make the block more than one statement. These issues just don't arise when one uses 1TBS.

So, how is this change in belief going to affect my programming life? Hardly at all, unless I write stuff on my own (e.g. updating my sourceforge projects or creating new ones). After all, I am in an environment in which
1TBS is forbidden and its use would probably harm the code base as it would make the blocks in long rambling functions even harder to understand than they already are. I can't even use 1TBS in the java code, since Allman is mandated there as well.

Sunday, January 06, 2019

A great C++ blog I've found

I've found a great C++ blog and I thought I just had to mention here it. There are lots of goodies about C++17 and C++20 and it keeps track of well known players in the industry and what they are up to, e.g. people like Barnje Stroustrup, Herb Sutter, Nicolai Josuttis and John Lakos. The blog mentions high profile features and plans for C++ including things like the inclusion of Howard Hinnant's date library into the standard and the adoption of contracts. I encourage everyone to take a look.

Saturday, December 29, 2018

The insane OOM (out of memory) Killer

In the late nineties I worked on AIX for the first time. Back in those days there were several flavours of Unix available, all with their differences and idiosyncrasies. Linux was a fledging and fitted on just one CD. I came across a feature of AIX which I thought was crazy - the OOM (out of memory) killer. In this variant of Unix malloc always succeeded, even when there wasn't enough memory. The idea was that malloc returned a pointer to heap memory but wouldn't actually start to use it until the first reference was made. At the point at which it did then memory had jolly well better be available. If it was then all well and good. If not then the OOM killer came into play. The OOM killer would choose a victim process and kill it. The result was that memory would be freed and the access occurring at the time would succeed. Sounds insane, right? Right. I laughed and thought that this one feature rendered AIX useless compared to the other Unixes and would lead to its demise. How wrong I was. Fast forward a few years later. It was added to Solaris. Sigh. Fast forward to today. It has been added to Linux.

The OOM killer is a kernel development that mirrors what happens when banks try to innovate. It's what I call "the conspiracy of crappiness". It goes like this: some group or other tries to innovate but comes up with a really bad idea that doesn't work well and everyone hates it. The competition discover the move and for some inexplicable reason they copy it. Now everyone hates the competition as well and none of the players can be distinguished in this area. Bank charges on current accounts is an example. So is charging for withdrawals at ATMs (although customers have objected so vehemently to that one that there has been some back peddling). Well, in the world of Unix we now have the OOM killer.

There's a good article at LWN, that explains why this is insane. There's another article that gives tips on how to mitigate the nastiness, but surely that it yet another testimony to the fact that it is nasty. I also came across this article that discusses the nastiness and has an excerpt of .an amusing article that discusses the fairness, or otherwise, of how the victim is chosen. Here is the excerpt:


An aircraft company discovered that it was cheaper to fly its planes with less fuel on board. The planes would be lighter and use less fuel and money was saved. On rare occasions however the amount of fuel was insufficient, and the plane would crash. This problem was solved by the engineers of the company by the development of a special OOF (out-of-fuel) mechanism. In emergency cases a passenger was selected and thrown out of the plane. (When necessary, the procedure was repeated.) A large body of theory was developed and many publications were devoted to the problem of properly selecting the victim to be ejected. Should the victim be chosen at random? Or should one choose the heaviest person? Or the oldest? Should passengers pay in order not to be ejected, so that the victim would be the poorest on board? And if for example the heaviest person was chosen, should there be a special exception in case that was the pilot? Should first class passengers be exempted? Now that the OOF mechanism existed, it would be activated every now and then, and eject passengers even when there was no fuel shortage. The engineers are still studying precisely how this malfunction is caused.

Update: 27 March 2022

Since that aircraft analogy I have found an article on the perils of overcommit which gives a more dispassionate assessment, but still concludes it is a terrible idea: https://www.etalabs.net/overcommit.html

Sunday, December 02, 2018

I can't stand the JBoss Application Server

I wonder which application server people chose when working on Java projects that need to publish dynamic web pages. I have used tomcat in the past and found it to be pretty good. But for the last few years I have been in an environment where JBoss was chosen. JBoss comes with all sorts of enterprisey EE things such as a JMS implementation and whilst initially this may seem attractive I have decided that I don't like it. I would now recommend that any project that needs JMS and dynamic web pages avoids an enterprise application offer. Instead I think it is better to chose the web page and JMS solutions separately.

Years ago I wrote a book review for ACCU on a JBoss tutorial book. I gave the book a bad review because it was largely XML fragments concerning JBoss configuration. But I now see that this is what struggling in a JBoss environment is all about. I still think the book was wrong to have such large XML sections though. The precise XML needed to make JBoss do what you want seems to wibble depending on the exact version of Jboss you have and also possibly on what colour socks you are wearing. But it gets worse. Recently (wrt the time of writing this, December 2018) JBoss went proprietary. Red Hat now calls it JBoss Enterprise Application Platform or JBoss-EAP for short. Not to be confused with the old open source version which was just called JBoss. In an attempt to deal with the confusion Red Hat renamed the old one to Wildfly and open source development is now done under that name. Wildfly does seem to be much better than JBoss but it's all relative; it is still derived from JBoss and so still suffers from the tremendous environmental difficulties caused by obscure and constantly changing XML configuration.


So, for people who want JMS and web pages with dynamic content, I recommend ActiveMQ and Apache Tomcat respectively.

Saturday, November 25, 2017

Veracrypt instead of Truecrypt

Back in June 2014 Truecrypt died, but I and many others were able to build it from the source. I blogged about this before. Recently I had to access a couple of truecrypted volumes but found that my copy of truecrypt no longer worked. It relied on an old version of GTK that was no longer on my system. After some fruitless attempts to restore the required version of GTK2 I decided to try out VeraCrypt, which is the successor to Truecrypt.

Veracrypt is everything that Truecrypt was, and more. Fully open source, multi-platform strong encryption with optional plausible deniability and compatibility with Truecrypt. After installing yasm and libfuse I was able to build Veracrypt from source with no trouble at all. And it works. It was able to read my old truecrypt volumes. It also works on my Windows laptop. Wonderful! I have now switched over to Veracrypt.

Thursday, October 19, 2017

How to find which dependent DLL can't be found

A couple of times in the last few years I have faced a knotty problem to do with DLLs on Windows. The first occasion was when a large complex program was performing that Win32 function ::LoadLibrary and it failed to find a sub-dependent DLL. A more recent case was where a java program called loadLibrary to load a shared C++ library used by a JNI interface. This library load also failed. Both failures were silent and mysterious. No details are given, just that the load failed.

I googled for help and asked friends and colleagues. The answer that came back again and again was to use the Dependency Walker at http://www.dependencywalker.com. Well, it turns out that every time I used that program to solve the riddle it was no help at all. I have now found a more reliable way, thanks to a tip via the ACCU general mailing list. I wrote a little C++ program. Before you run the program set PATH to the value it would have in your particular problem situation. The program waits for the user to hit return, then it calls ::LoadLibrary on the library name supplied. What you have to do is run the program (with PATH set appropriately) and while it is waiting for you to hit return, run Procmon from SysInternals. Enter the pid for the LoadLibrary program and set a filter for Operation to QueryOpen. Then hit return so it tries to load the library. The Procmon windows will then fill with all the file access attempts made to resolve the DLLs.Bear in mind that it is using PATH to locate the DLLs so there will be several access failures. The thing to do is check each leaf DLL name and find the case or cases where it failed to find the DLL no matter which PATH directories were searched. That's it, you have found which DLL failure(s) occurred!

It is a shame there is no more convenient way to deal with this situation. If only the logic of calling ::LoadLibrary could be combined with the logic in Procmon that records each QueryOpen with the pathname and whether or not the access worked, all in one program. Maybe one day someone will write such a program, but in the meantime this solution will have to do.

Saturday, April 15, 2017

Linux Mint 17 and scroll bar arrows

A while ago I did a complete reinstall of my desktop machine using Linux Mint 17. One of the first things I noticed after doing this was that the arrow buttons that normally appear at the ends of the scroll bar had disappeared. I deemed this a minor irritation and didn't do anything about it. More recently I investigated the cause and found various blogs etc where people were complaining of the same thing and offering various solutions. Only one solution actually worked for me; details below:

* Ensure that your changes are made to the Mint-X theme. You need access to the theme selector: click on the Menu button (bottom left-hand corner) and click on Settings. In the right-hand menu pane click on Appearance (the jacket-and-tie icon). The theme selector is on the first tab, Style (the default tab). When you click on a theme it is immediately selected; there is no need to log out, reboot, or anything else. On selecting a theme the theme config files are read and processed. Therefore, after you edit the theme files, use the selector to pick any theme *other* than Mint-X, then click on Mint-X again to pick up your changes.

* As root, edit the theme files. These are found under /usr/share/themes, so for Mint-X the directory is /usr/share/themes/Mint-X. There are sub-directories for gtk-2.0 and gtk-3.0. My edits were done to gtk-2.0. The file there is called gtkrc. Make a backup copy of the file first. Ensure your file contains the following:

GtkScrollbar::has-backward-stepper = 1
GtkScrollbar::has-forward-stepper = 1
GtkScrollbar::stepper-size = 18
GtkScrollbar::min-slider-length = 30
GtkScrollbar::slider-width = 18
GtkScrollbar::trough-border = 1
GtkScrollbar::activate-slider = 1

The crucial line turns out to be:

GtkScrollbar::stepper-size = 18

Without that line, no scroll bar arrows.
I found this tip on Linux Questions at http://www.linuxquestions.org/questions/linux-mint-84/question-how-to-enable-scrollbar-arrow-buttons-in-linux-mint-v17-3-a-4175580868/.

Sunday, February 26, 2017

Using Joda time to handle date+time+timezone

I have been working on a client-server system where the client and server are in different timezones. The client is on London time, the server is in Los Angeles, a difference of 8 hours. This means that at the end of the business day in LA it is already the next day in London. The API that I have to use contains functions with parameters of type Calendar. As we know, a Calendar is a date+time+timezone triple, and the timezone defaults to the local timezone if not set explicitly. This means that if the timezone is not explicitly specified then the value will change its meaning as it goes over the wire.

I have been using the Joda datetime package to help me, and after a bit of struggling I eventually came up with the example program below. It shows the construction of Joda DateTime objects for a specific date+time+timezone, displayed correctly in both timezones, and also how to construct Calendar objects from them for the correct timezone.

package jodaexample;

import java.text.SimpleDateFormat;
import java.util.Calendar;

import org.joda.time.DateTime;
import org.joda.time.DateTimeZone;
import org.joda.time.format.DateTimeFormat;
import org.joda.time.format.DateTimeFormatter;

/**
 * @author marlowa
 * This example shows a time of 18:30 in America/Los_Angeles, which on
 * this date is PDT, 7 hours behind UTC. This means the date+time is
 * 01:30 the next day in UTC, or 02:30 the next day in BST.
 */
public class example {

    public static void main(String[] args) {
        System.out.println("Joda timezone example program.");
        DateTimeZone mytimezone = DateTimeZone.forID("America/Los_Angeles");
        DateTime mydatetime = new DateTime(2017, 3, 31, 18, 30, 0, mytimezone);
        String formatString = "yyyy-MM-dd HH:mm:ss z '('Z')'";
        DateTimeFormatter dtf = DateTimeFormat.forPattern(formatString);
        System.out.println("DateTime (local, i.e. behind UTC) = "+dtf.print(mydatetime));
        Calendar datetimeInAmerica = mydatetime.toGregorianCalendar();
        SimpleDateFormat sdfInAmerica = new SimpleDateFormat(formatString);
        sdfInAmerica.setCalendar(datetimeInAmerica); // to set the timezone.
        System.out.println("Calendar inAmerica                = "+sdfInAmerica.format(datetimeInAmerica.getTime()));
  
        long dateTimeMilliseconds = mydatetime.getMillis(); 
        int millisecondsOffset = mytimezone.getOffset(dateTimeMilliseconds);
        System.out.println(String.format("Milliseconds = %d, offset = %d", dateTimeMilliseconds, millisecondsOffset));
  
        long millisecondsInTimezone = dateTimeMilliseconds+millisecondsOffset;
        System.out.println("millisecondsInTimezone            = "+millisecondsInTimezone);
        long millisecondsInUTC = mytimezone.convertLocalToUTC(millisecondsInTimezone, false);
        DateTime dateTimeUTC = new DateTime(millisecondsInUTC, DateTimeZone.UTC);
        System.out.println("DateTime (UTC)                    = "+dtf.print(dateTimeUTC));
        Calendar datetimeInLondon = dateTimeUTC.toGregorianCalendar();
        SimpleDateFormat sdfInLondon = new SimpleDateFormat(formatString);
        sdfInLondon.setCalendar(datetimeInLondon); // to set the timezone.
        System.out.println("Calendar inLondon                 = "+sdfInLondon.format(datetimeInLondon.getTime()));
    }
}

Wednesday, May 25, 2016

The cause of MIDL error MIDL2398

Whilst working on a Windows Visual Studio (VS) project that uses MIDL (the Microsoft IDL compiler) I suddenly started getting the error MIDL2398 during the build, for no apparent reason. I tried the usual things: logging off, rebooting, cleaning the VS project, blowing my SVN checkout away and doing a fresh build. Sometimes this worked and sometimes it didn't. This was driving me nuts. Googling didn't yield much. Other people were also seeing this problem but no-one explained it, and several people said that when they re-installed VS it went away. Yeah, right.

After a few hours of these failures I noticed something: the problem seemed to happen roughly on the hour. Then I remembered. I had recently set up a jenkins job to run on the hour. The jenkins job was building something else in an unrelated area, but the coincidence seemed too great to ignore. I disabled the job. It turned out that the jenkins job was failing, and during the failure logging it tried to write to a logfile without using double quotes around the filename. Since it was a jenkins job with the default jenkins install directory, the pathname started with "C:\Program Files (x86)\Jenkins". This caused a logfile called "C:\Program" to be created. The jenkins job, being run by jenkins, had administrator privileges, so it was allowed to write this file to the root directory. When my VS build got to the COM bit where it runs MIDL, I got the error.

The fact is that the presence of the rogue file "C:\Program" kills MIDL with this weird error. Well I never. And it turns out to be easier to accidentally create this rogue file than you might think.