Sorted for Excel and Whee!

If you happened upon the VBA Lamp and by rubbing it were able to produce the Excel VBA Genie and were granted a VBA Wish, would you ask for a built-in Sort() function?

If you build the kind of Excel apps that I do, you’ll run up against the need for a Sort all too frequently. Sorting arrays with sizes in the tens of thousands is not unusual and I was reminded of this when reading a post in this entry at Andrew’s Excel Tips the other day.

While it’s not crucial in the (useful) idea presented, the code has a quick and dirty sort that is about the worst sort algorithm1 one could intelligently come up with. It’s also a pretty intuitive solution, which just goes to show that sorting may not be as simple as we may think. I’m not attacking Andrew’s skillz here, btw: the code as presented is certainly fit for purpose; it’s not presented as a general-purpose utility (at least I hope it isn’t).

I’ve accumulated a few algorithms over the years and I’ve coded up a couple more while “researching” this piece. On the one hand, we have the oft-derided BubbleSort and its close relation, CocktailSort. In the same group I’d include InsertionSort and SelectionSort. I’m going to be harsh and categorise those, with the naive sort above, as “Slow”. Well, “mostly slow”, as we’ll see.

In the “Fast” group, we have the much-touted QuickSort, and somewhere in between, we have HeapSort and my current algorithm of choice, CombSort. As I was researching this, I also coded up ShellSort, which is about as old as I am and which was claimed to be faster than QuickSort under some conditions.

I ran some comparisons, not meant in any way to be perfectly scientific2. I ran each algorithm on arrays of 50 to 50,000 values with six different characteristics:

  • already sorted
  • exactly reversed
  • mostly sorted (values typically within one or two places of target)
  • ordered blocks of 100 random values (the first 100 values are 0 + RAND(), then 100 + RAND() and so on)
  • completely random
  • random 10-character strings

First off, the 50-record results:


50 recs (ms)   sorted   near sorted   blocks   random   strings   reversed
Shell            0.06          0.06     0.11     0.10      0.24       0.09
Quick            0.13          0.13     0.16     0.13      0.29       0.14
Comb             0.09          0.10     0.17     0.17      0.33       0.13
Heap             0.34          0.33     0.32     0.32      0.52       0.28
Insertion        0.02          0.02     0.20     0.17      0.47       0.37
Selection        0.25          0.25     0.25     0.25      0.54       0.25
Cocktail         0.01          0.02     0.44     0.39      1.02       0.77
Bubble           0.01          0.02     0.50     0.45      1.12       0.78
Naive            0.22          0.23     0.50     0.46      1.06       0.77

I’d say it’s pretty clear that it doesn’t matter much what you use to sort a small array, just about anything will be fast enough (unless you’re going to perform that sort tens of thousands of times in a run). It’s also apparent that the “slow” algorithms are actually pretty good if our data is already reasonably well-ordered.

So far, so “so what?”

Let’s look at the opposite end of the spectrum: 50,000 values. Here, the Fast/Slow divide is apparent. First the “Slows” (two tests only, for reasons that should become apparent):


50K (ms)     near sorted      random
Bubble                31     522,216
Cocktail              30     449,696
Insertion             19     179,127
Naive            219,338     510,010
Selection        220,735     220,743

Yes, those really are hundreds of thousands of milliseconds. Anything from three minutes to nearly nine, to you and me. The “Fasts”, meanwhile:


50K (ms)   sorted   near sorted   blocks   random   strings   reversed
Shell         162           164      219      377       929        250
Quick         296           298      327      365       790        306
Comb          390           396      477      622     1,348        452
Heap          899           903      885      874     1,548        844

(I only ran two tests on the “Slows”, for fear of dozing off completely.)

Again, for data where values are near their final sorted positions there’s clear evidence that something like an Insertion Sort is much faster than any of the “sexier” options. Provided you know your data will actually meet that criterion, of course.
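For comparison, here’s roughly what an insertion sort looks like in VBA: a minimal sketch rather than the exact routine I timed, written in the same style as the ShellSort below. The inner loop bails out as soon as it finds a smaller value to its left, which is why nearly-sorted input costs it almost nothing.

Public Sub InsertionSort(inp)
' sorts supplied array in place in ascending order
Dim i As Long, j As Long
Dim temp
  If Not IsArray(inp) Then Exit Sub
  For i = LBound(inp) + 1 To UBound(inp)
    temp = inp(i)
    j = i - 1
    Do
      If j < LBound(inp) Then Exit Do ' off the front of the array: stop
      If inp(j) <= temp Then Exit Do ' already in place: near-sorted data exits here almost at once
      inp(j + 1) = inp(j)
      j = j - 1
    Loop
    inp(j + 1) = temp
  Next
End Sub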

All that considered, I’m switching from CombSort to ShellSort as my default algorithm. While it loses out a little to QuickSort in the “random” test (probably most representative of my normal use case) it doesn’t carry with it the fear of stack overflow through extreme recursion, something that’s bitten me with QS in the past. Anyway, us old’uns have got to stick together.

As already mentioned, if you have small quantities of data or infrequent sort calls in your application, it really doesn’t make much difference which algorithm you use, although I’d still suggest being aware of the properties of several options and having a basic implementation to hand. Once you reach a point where sorting contributes materially to your application run time then you owe it to yourself and your users to make an intelligent selection.

Here’s my ShellSort implementation in VB, transcoded fairly literally from the Wikipedia pseudo-code (optimisations welcome):


Public Sub ShellSort(inp)
' sorts supplied array in place in ascending order
Dim inc As Long, i As Long, j As Long
Dim temp ' don't know what's in the input array...
  If Not IsArray(inp) Then Exit Sub ' ...but it had better be an array
  inc = (UBound(inp) - LBound(inp) + 1) / 2 ' use Shell's originally-proposed gap sequence
  Do While inc > 0
    For i = LBound(inp) + inc To UBound(inp) ' start one gap above the lower bound: the array needn't be zero-based
      temp = inp(i)
      j = i
      Do
        If j - inc < LBound(inp) Then Exit Do ' check these conditions separately, as VBA Ors don't short-circuit
        If inp(j - inc) <= temp Then Exit Do ' ... and this would fail if j - inc fell below the lower bound
        inp(j) = inp(j - inc)
        j = j - inc
      Loop
      inp(j) = temp
    Next
    inc = Round(CDbl(inc) / 2.2)
  Loop

End Sub
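Calling it is as dull as you’d hope. A quick illustrative check from the Immediate window (not one of the timed tests):

Public Sub DemoShellSort()
Dim values As Variant
Dim i As Long
  values = Array(42, 7, 19, 3, 28) ' a small zero-based Variant array, just to show the call
  ShellSort values
  For i = LBound(values) To UBound(values)
    Debug.Print values(i) ' prints 3, 7, 19, 28, 42
  Next
End Sub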


1 Not the absolute worst: there’s the catastrophic genius of BogoSort, to name but one, but let’s not go anywhere near there.
2 Just so we understand each other, these are my figures for my code on one of my machines. YMMV. A lot.

Going Metric

(stackoverflow rep: 10,307, Project Euler 97/288 complete)

(in which a single comment substitutes for a butterfly’s wing, metaphorically speaking)

A week or so back, as part of the unfocused rant1 that stemmed from exposure to a particularly ghastly Excel travesty, I mentioned the little home-made tool I use to size up unfamiliar VBA code.

Since it struck a chord with at least one person2 and I’d been meaning for months to write something more about it, I’m going to ramble about my feeble (but still surprisingly useful) approach to VBA code metrics.

The original idea came from three contemporaneous events several years ago: a large and nasty workbook that needed to be heavily modified, some reading about McCabe’s cyclomatic complexity metric and the discovery of a little VB utility named Oh No!

Oh No!

This utility scanned a VB project and displayed a chart that attempted to show where possible problem areas might lie, plotting the size and complexity of each routine (Function, Sub, Property etc) on its X and Y axes. Size was the number of non-blank, non-comment lines, while complexity was (from the Help file):

The complexity is measured as the total number of structure statements in a procedure. These statements are: Do, For, If, ElseIf, Select, While and With.

It looked like this (pointed in this instance at its own source code):

The third dimension, “blob size”, was related to the number of parameters in the routine. All in all, it’s a nicely-done, well thought-out little app.

For my uses it had its drawbacks, of course. To start with, it wanted a .vbp (VB project) file for input, something I didn’t have in my Excel/VBA project. For another, I didn’t seem to have the sort of brain that could easily extract useful information out of the graphical display. The code didn’t like continued lines, something I use a lot, and I felt that size and parameter count were contributors to complexity: we didn’t really need three dimensions.

Most important of all, writing my own app looked like a fun part-time project and I’m a complete sucker for those.

I should confess now that a cyclomatic complexity calculation proved to be beyond me when I developed this utility in 2002. It’s probably still beyond me now, but back then the reference I had was the original paper and gaining the necessary understanding from that would in all likelihood have taken all the fun out of the whole thing. So I fudged it. Sue me.

Approaching the problem

The VBComponents of the target workbook are scanned and a collection of modules is constructed, each of which contains a separate object for each procedure3 it contains. As procedures are constructed from lines, they attempt to classify what they’re being presented with and tally up the incidences of things worth counting:

Public Enum eCounterType
    ecComments = 1
    ecDeclares = 2
    ecElseIfs = 3
    ecGotos = 4
    ecHighestNestLevel = 5
    ecIifs = 6
    ecIfs = 7
    ecLines = 8
    ecLongLines = 9
    ecLoops = 10
    ecMultiDimCount = 11
    ecOneCharVars = 12
    ecParameters = 13
    ecSelects = 14
    ecWiths = 15
End Enum

You may have spotted that the deepest nesting is also recorded, although it’s not exactly a “counter” – perhaps I should have used “scoreCard” or something more expressive.
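In outline, the scanning pass looks something like the sketch below. It’s illustrative rather than LOCutus’s actual code (and it needs “Trust access to the VBA project object model” switched on); the classifying and counting happens where the comment sits.

Public Sub ScanProject(wb As Workbook)
' walk every module in the workbook's VBA project, line by line
Dim comp As Object ' VBIDE.VBComponent, late-bound to avoid needing a reference
Dim lineNum As Long
Dim codeLine As String
  For Each comp In wb.VBProject.VBComponents
    With comp.CodeModule
      For lineNum = 1 To .CountOfLines
        codeLine = .Lines(lineNum, 1)
        ' ...classify codeLine here: comment? declaration? If/Do/For/Select/With? long line?...
        Debug.Print comp.Name; lineNum; codeLine
      Next
    End With
  Next
End Sub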

Each procedure gets a complexity “score”, which ended up being calculated, after much tweaking, using a tariff something like this:

Pts      Awarded for
1.0      every “nesting” command (If, Do, For, Select, ElseIf, etc)
0.1      each line of executable code after the first, rounded down
0.1      each declared variable after the first, rounded down
1.0      each line that contains multiple Dims
0.5      each parameter after the first
2.0      each If (more than loops because of the arbitrarily complex branching)
0.2      each long line (defined as 60 chars or more, after concatenating continued lines)
1.0      each Go To
special  the square of the deepest nesting level encountered

It’s not supposed to be exact and it doesn’t need to be: the purpose is to give a quick sanity check and, particularly when encountering new code for the first time, an idea of where problem areas may lie.
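Put into code, the tariff comes out something like the sketch below. It’s a reconstruction for illustration rather than LOCutus’s actual routine, and the counter array indexed by eCounterType is my assumption about how the tallies are stored; in particular, I’m guessing at how the If weighting and the generic “nesting command” point combine.

Public Function ComplexityScore(counters() As Double) As Double
' rough reconstruction of the tariff; counters() is indexed by eCounterType
Dim score As Double
  score = counters(ecLoops) + counters(ecSelects) + counters(ecElseIfs) _
        + counters(ecWiths) ' 1.0 per "nesting" command
  score = score + counters(ecIfs) * 2# ' Ifs weighted more heavily than loops
  score = score + Int(AfterFirst(counters(ecLines)) * 0.1) ' executable lines after the first, rounded down
  score = score + Int(AfterFirst(counters(ecDeclares)) * 0.1) ' declared variables after the first, rounded down
  score = score + counters(ecMultiDimCount) ' lines containing multiple Dims
  score = score + AfterFirst(counters(ecParameters)) * 0.5 ' parameters after the first
  score = score + counters(ecLongLines) * 0.2 ' long (60+ character) lines
  score = score + counters(ecGotos) ' each Go To
  score = score + counters(ecHighestNestLevel) ^ 2 ' the "special": deepest nesting level, squared
  ComplexityScore = score
End Function

Private Function AfterFirst(ByVal n As Double) As Double
' "after the first": never go negative when the count is zero
  If n > 1 Then AfterFirst = n - 1 Else AfterFirst = 0
End Function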

There are deficiencies: I don’t think the code handles one-line If/End Ifs correctly – I believe it incorrectly increments the nesting level, which leads to a higher score. Since I really don’t like one-line Ifs, that’s just going to increase the chance that I see them, so I was happy to leave it like that.

If you look at the output, there are other metrics: information on the whole project, module-level summaries (if you see 300 global variables you might consider that worth investigating, for example).

There’s no penalty for “Static” declarations, probably because I hadn’t stumbled across any code that thought it was a good idea. I’d put a fairly hefty tariff on that: it’s a major obstacle to the unravelling process.

Another “drawback”: there’s no real parser – a better approach might have been to write one and utilise an AST (the very useful roodi library for Ruby uses one and can report a “proper” cyclomatic complexity metric as a result). Also, I didn’t use regular expressions – I probably hadn’t encountered them (or was still frightened of them) at that time: they would have made some of the line-parsing a good deal more terse.

I called it “LOCutus” – “LOC” for “lines of code”, the balance presumably indicating that the Borg were on my mind at the time. For better or worse, I’ve added it to the xlVBADevTools download (in a ZIP file with xlUnit). It’s an Excel XP file – I don’t think I’ve ever tested on XL2007 or higher, since I’ve yet to work at a site where it was used and prefer not to look at large crappy Excel apps without being paid for it…

If you download it, you’ll see that the report is showing the results of a scan on the app’s own code (shown as a HTML table below, so formatting may vary slightly):

VBProc (Module)        Globals   Members   Procs
                             0         7      21

Procedure              Lines   Max nest   Comments   Declares   Ifs   Loops   Long lines   Go Tos   OCV    X
Load                       8          2          0          3     1       1            0        0     0    9
ProcessLine               20          1          0          1     5       0            0        0     0   12
CheckForOneCharVars        7          2          0          1     1       1            0        0     0    8
CountParameters           14          3          0          4     3       1            0        0     0   17

The “worst” proc scores 17, largely due to its 3-level nesting. If I were to have a need to change it, I’d consider breaking out the inner loop (or taking the outermost If and refactoring into a guard clause). Seventeen isn’t generally high enough to make me concerned though.

That’s the most “formal” part of the diagnostic “toolkit” that I apply to strange new Excel spreadsheets – anyone out there have something else we could look at?

xlVBADevTools download


1 Aren’t they all? You should hear me around the office. Probably not, actually.
2 OK, exactly one.
3 These days I favour “routine” as a generic term for Function/Sub/Property or whatever. Times change.

It’s An Ill Wind

I was pontificating recently about the horror of tightly-coupled worksheet calculations and macro code in Excel and the mess that results from carelessly mashed-together programming paradigms.

By a horrible coincidence, a few smug weeks later I suddenly found myself deep in the horror again. This time, I had to contend with two versions, needing to set up and run several tens of thousands of real-world cases through “before” and “after” scenarios.

I soon started to turn up nuggets of wonderment. Here are a couple of gems from the worksheets:

=IF(CPPI=TRUE,TRUE,FALSE)
=MAX(1,DATE(YEAR(C4),MONTH(C4),DAY(C4))-DATE(YEAR(C6),MONTH(C6),DAY(C6)))

“Why”, I asked myself, “don’t these monkeys think?” (When someone asked me which monkeys I was referring to, I realised that my inner monologue was malfunctioning, but that’s not important right now.)

Obviously they were thinking, apart from anything else the whole thing does seem to work, inasmuch as a set of inputs produce a set of plausible-looking outputs. I’d assert, however, that an experienced, competent developer has a continuous, higher-order review process running, asking questions like “does that look right?”, “could that be written better?”, “is that clear?”, “could someone else understand that?” It would appear that the individuals concerned in this case had no such process running.
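To be concrete about it (and assuming CPPI holds a Boolean and that C4 and C6 hold plain dates with no time-of-day component, which appears to be the intent), the two formulas above collapse to:

=CPPI
=MAX(1,C4-C6)

If the cells might carry a time component, wrapping each in INT() achieves the same truncation with far less ceremony.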

I have a little tool that I bring out when faced with an unknown Excel VBA application. It scans the code and delivers a report that gives me an idea of the degree of caution I should exercise, calculating for each routine an “X” number, being some arcane function of code lines, nesting level, numbers of Ifs, loops and one-character variables (seldom straightforward to rename, even if they’re called “q” or “z”). A score of 10 or above indicates a possible problem area. Over 50 and we’re definitely in bandit country. Here’s the report for the top-scoring routine in the horror workbook (names changed to conceal the guilty):

At 422, it out-scores, by a factor of about three, any routine I’ve ever encountered before. (By contrast, and perhaps it’s an unfair comparison, the top score in xlUnit is 24.) Those 1,044 lines of code don’t include comments, blank lines, variable declarations or similar. Even better, it’s declared Static, something so appallingly dangerous (to me, at least – I get nauseous when I see even a Static variable) that I had to change the analyser because it had never seen it before. “Option Explicit” is honoured more in the breach than the observance and there are a mere (ha!) 156 global variables.

I was flicking through my cherished (and slightly stained) copy of The Pragmatic Programmer last night and I’ll take as my text the section “What Makes a Pragmatic Programmer?” on pages xviii and xix of the Preface. Summarising the characteristics and comparing them with the authors of the unholy mess described, I get:

Early adopter/fast adapter: given that the abomination I’m working with is the culmination of more than two years’ work and is the fourth or fifth (and probably “best”) version, FAIL.

Inquisitive: I see no sign of any curiosity, not even “why do we suck?” FAIL

Realistic: I’m (generously) prepared to specify that this one may not really apply here. (Big of me.)

Jack of all trades: and masters of none? These guys didn’t make it to Jack of one trade. FAIL.

Critical Thinker: referring to the inner monologue again, there’s no evidence to suggest that anything like the questions listed above were ever applied here. And plenty of evidence to the contrary. BIG RED FAIL.

The tragedy here is that the monkeys described are unlikely to realise exactly how snivellingly far from acceptable (let’s not waste “best” here) practice they are, as the Dunning-Kruger effect tells us is likely, so they’re not going to change in any useful way. Failing to see that the twin demons of high coupling and low (non-existent, if we’re honest) cohesion are rampaging through their code leaves me wondering where we’d start.

Depressingly, these guys aren’t unusual, they’re not even the worst. The type is perhaps a little more common in, say, the Excel world, where the tools to develop staggering incompetence are so widely available, but they’re everywhere else – the invisible 90% of the iceberg.

On the plus side, even moderately competent technicians who are prepared to don the elbow-length protective gloves and dive in to such messes will be able to look forward with some confidence to a lifetime in which there will be a paycheck for as long as they can stand to pick it up.

Buddy, Can You Paradigm?

(stackoverflow rep: 8998, Project Euler 89/273 complete)

(or: “If Paradigm was half as nice”)

I started reading Coders At Work recently. In the style of a few predecessors, the book comprises transcribed (and presumably edited) interviews with fifteen programmers who have made significant contributions of various kinds. It’s a weighty tome, possibly a little overweight if I’m honest, but interesting nonetheless; it makes a change from the kind of technical tome I usually cart around on the daily commute.

Circling a little closer to, and nodding vaguely in the general direction of, the point: interviewee number seven, reached just this morning, is Simon Peyton Jones, one of the progenitors of Haskell, which is one of the first names, if not the first, that springs to mind when one hears “functional programming”. For the last decade, Jones has been a researcher at Microsoft Research in Cambridge, which suggests that he would have had at least one finger in the F# pie… DotNetRocks show number 310 features an interview with the man, by the way. I’m enviously disturbed that despite our very similar ages, he appears considerably less follicly-challenged than me, although there could be some comb-over action going on there.

In the taxonomy of programming paradigms (does that make sense?) we have Declarative and Imperative. They are opposites. In the Declarative fold, we get stuff like SQL, XSLT, all the functional languages and, I submit, Excel worksheets. Actually it’s not just me: that Wikipedia link places spreadsheet cell-based programming within the Dataflow subcategory of Functional. Imperative programming, on the other hand, includes the panoply of more “traditional” languages, including good old Visual Basic and its slightly less functional sibling VBA.

Thinking about Excel and VBA development, I’d say there are several fairly distinct uses for VBA code (usually, but not exclusively: we can write add-ins or perform automation in plenty of other ways) which include:

  • New worksheet functions;
  • Interface extensions (menus, toolbars, ribbons etc);
  • Helpers (or wizards);
  • Control and simulation

…of which I write a lot of the latter two.

(You know how some comedians talk about a string of seemingly unrelated things and then cleverly tie them all together at the end? Well, fingers crossed, stay tuned)

A while back, I was handed a workbook, developed by an external consultancy, that modelled the 20-year evolution of a financial product. We wanted to run it across a significant number of stochastic market data simulations to get a picture of the distribution of possible outcomes. About 30,000 such simulations (and about 16GB of simulated market data) would be sufficient, we thought. The workbook took about 20 minutes to perform one such calculation. We didn’t have a year.

Fortunately, there were any number of classic optimisation failures to rectify: massive VBA/worksheet interaction at the cell level (changed to use large arrays), screen update enabled at all times, lots of less painful VBA-specific stuff, Application.Calculation = xlCalculationAutomatic, you know the drill. Easy stuff. Within a day it was taking about a minute and I could look at having the whole analysis done in under a week (I have a four-CPU machine).
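For anyone who hasn’t met them, the fixes mostly amount to the boilerplate below: an illustrative sketch, not the workbook’s actual code, with “Inputs” and “Results” standing in for whatever the real ranges were called.

Public Sub RunOneCase(ws As Worksheet)
' the standard shape: silence Excel, work on arrays in memory, calculate once
Dim inputs As Variant, results As Variant
Dim previousCalc As XlCalculation
  previousCalc = Application.Calculation
  Application.ScreenUpdating = False
  Application.Calculation = xlCalculationManual

  inputs = ws.Range("Inputs").Value ' one read: a 2-D Variant array
  ' ...set up this case's values in the array, not cell by cell...
  ws.Range("Inputs").Value = inputs ' one write back

  Application.Calculate ' recalculate once, when we actually want it
  results = ws.Range("Results").Value ' one read of the outputs

  Application.Calculation = previousCalc
  Application.ScreenUpdating = True
End Sub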

If it all seems straightforward, it wasn’t. I’d wrapped the workbook in a modified (hacked-up from xlUnit) test jacket to check that my changes weren’t affecting the results, pretty much a necessity for any refactoring work, and one change kept breaking. Every time I tried to switch off automatic calculation, triggering it only when I thought it was needed, the final results came out different. I finally realised that the VBA and worksheet were extremely highly-coupled – calculations were occurring as the code pushed values through: change a value, auto-calc, change another value, auto-calc again and so on. Eek. It took almost a week to unravel. Of course, since I actually enjoy doing this kind of work for the most part, it wasn’t a complete disaster, but I could have lived without the time pressure…

As a finance worker (perversely, I’m increasingly inclined to describe myself as a Banker1 these days) I have cause, from time to time, to access The Bloomberg from Excel. At least one of the functions in the (extensive and powerful, let’s be fair) API is a real functional-imperative mess. Here’s one with which I’m not at all happy:

=BDH("MXWO", "PX_LAST", "01/06/2007", "28/12/2009", "Dir=V", "Dts=S", "Sort=A", "Quote=C", "QtTyp=Y", "Days=T", "Per=cd", "DtFmt=D", "cols=2;rows=672", "FX=EUR")

Yes, there are a lot of arguments2, but most are at least not positional, which shows they’re trying. This particular example asks for the last daily quote of the MSCI World Index, in Euro, between the dates given. Even better, I guess, there’s a wizard to generate it all for you. But here’s the nasty bit: the call fills as many rows – without warning about overwrites – as it needs. Oh, and that "cols=2;rows=672" parameter? It’s rewritten if you make any change that would affect the number of cells output. This is not easy to accomplish. Try duplicating the effect in VBA, go on, I double-dare you. I think there’s a separate server process at work here, able to make asynchronous (but timely) modifications to user worksheets.

Yikes. And ugh.

To me, there’s something broken here. They’re using an apparently functional programming model but allowing the code to affect the state of the worksheet in (from the user’s perspective at least) a rather arbitrary (or at least, unpredictable) way. We’re not even getting an array formula here – the function just causes chunks of worksheet to be filled with data. In point of fact, there looks to have been some serious programming work here – I don’t think in the normal course of events that calling a function is allowed to cause areas outside the calling range to be modified. I rather suspect an external process is being asked to pump values into the sheet through some devious means like (but let’s hope it isn’t) DDE.

(This is where I try to pull it all together. A little encouragement wouldn’t go amiss here…)

The fact that we have both paradigms represented as first-class components of a single development platform may be a large part of the reason for Excel’s dominance in the marketplace. But the interface between declarative and imperative code needs to be carefully managed to avoid the kind of unnecessary side-effects that I spent so long untangling.

The messed-up simulation I whined about above failed to recognise that the worksheets should have been, in effect, a great big, super-complex function, which the code used by setting up the input conditions, firing off a Calculate and then reading the output values. Instead, it tried to implement a tightly-coupled declarative-imperative abomination, with unfortunate results.

Let’s try not to repeat that mistake, for the sake of the children.

I’ve been (and remain) grumpy – thank you and good night.

You've been a lovely audience


1 Wanna make something of it?

2 Many Reuters functions do something similar, but push all the key-value pairs into a long string.

A Long Time Dead

(stackoverflow rep: 6602, Project Euler 78/252 complete)

I realised just now that Visual Basic1 went from 1.0 to 6.02 in about 7 years and stopped progressing 11 years ago. Even so, if I needed to do some COM automation work in an EXE, it’s still the tool I’d reach for, assuming I could locate my old Visual Studio 98 CD-ROMS, that is.

So by now VB’s pretty much been end-of-life longer than it’s been, er, life. Golly. It really is time Microsoft came up with a viable alternative. What I can figure out about VSTA doesn’t fill me with excitement, that’s for sure. VSTO, for the power-user at least (emphasis on “user”), still looks even worse. Neither offers, as far as I can see, any route for legacy VBA code, which is, I suppose, why VBA is still expected to be around for the foreseeable future.

Still, for the little it’s probably worth, we now have Make Office Better, within which we can rant about such importances. The buzz following its appearance seems to suggest that Microsoft still have some way to go. I am looking forward to sparklines in Excel 2010 though…

Mmmmm, sparklines...

1 (the “classic” one, not the impossibly confusing – to a still-active VBA user – VB.NET)
2 did we ever get a point release? I can’t remember any, although some of the ActiveX stuff did get service-packed.

The Glove Puppet Programmer

While I prevaricate over the four (!) drafts sitting at various stages of incompletion in my “Posts” view, I’ll take lunchtime out to reminisce about a non-contemporaneous* colleague and his unique status as probably the most expensive programmer I’ve ever had to deal with.

This particular individual, who was either French or French Canadian, had the singular identifying characteristic of a name that was a literal translation of a stuffed children’s TV character from my childhood. This in itself should have been a warning. He had created the Credit Risk reporting system that a team of ten expensive contractors, myself included, were labouring feverishly to replace. On his own. As a series of Excel spreadsheets. With extensive VBA macros, coming from no discernible programming background. If you’re wincing, I’m not surprised.

Why am I asserting he was peerless in the cost stakes? Because the application he’d built actually worked. Badly, slowly, requiring regular manual intervention, occasionally obviously inaccurate and more frequently rather less so, but it worked. And we were trying to reverse-engineer it**, distinguish between the right and wrong parts and deliver something that was everything the old “system” was not. Well, almost: we did need our system to work.

This is one of the gems I discovered in the VBA:

    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 0).value = bv_sSTRATEGY
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 1).value = bv_sTRADENAME
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 2).value = bv_sTRADESTATUS
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 3).value = bv_sStructType
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 4).value = bv_sSTRUCTMODEL
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 5).value = bv_sTRANCHENAME
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 6).value = bv_iTRANCHENUMBER
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 7).value = bv_lTRANCHENOTIONAL

    ... are you getting the picture? We continue incrementing the offset for a while, until:

    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 29).value = bv_sCOMMENTDEFAULT
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 30).value = bv_sNEWTRANSID
    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1").offset(br_lCount + 1, 31).value = bv_sCOMMENT

In case you were wondering, the “bv_” prefix signifies “By Value.” And yes, that does mean that each of those values appeared as a parameter to the routine. I counted 32 in total. That’s thirty-two input parameters. You really should be wincing by now.
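For the record, the thirty-two separate cell writes (and arguably the thirty-two parameters too) could collapse into one array and one assignment, something along these lines; this is sketched for illustration, not lifted from the eventual replacement.

    ' build the row in an array, then write it in one go
    Dim rowValues(0 To 31) As Variant
    rowValues(0) = bv_sSTRATEGY
    rowValues(1) = bv_sTRADENAME
    ' ...and so on for the remaining slots...
    rowValues(31) = bv_sCOMMENT

    br_OutputWB.Worksheets(bv_sOutPutWSName).Range("A1") _
        .Offset(br_lCount + 1, 0).Resize(1, 32).Value = rowValues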

Several other routines also had the same (or 99% the same) code blocks. Our friend the cut-and-paste programmer must have really earned his money that day…

Oh yes, you should see an example of how our chum called his parameter-heavy subroutines. Consider acquiring welding goggles before looking at this, by the way.

Call SS_BuildMTMByTrade(br_OutputWB:=OutputWB, br_lCount:=lMMcount_SS, bv_dbFXrate:=GetFXRate(vMM_FXRates, RgCursor.offset(lOffset, Range("TRANCHECCY").Column - 1).value), _
bv_sSourceFile:=m_sSS_MM_Source & IIf(bv_sTradeNamePrefix <> "", bv_sTradeNamePrefix & RgCursor.offset(lOffset, Range("TRADENAME").Column - 1).value, RgCursor.offset(lOffset, Range("TRADENAME").Column - 1).value) & "_" & "MM" & "_" & RgCursor.offset(lOffset, Range("MM_CSB").Column - 1).value & "_" & RgCursor.offset(lOffset, Range("MM_R").Column - 1).value & "_" & IIf(RgCursor.offset(lOffset, Range("MM_PW").Column - 1).value, "T", "F") & "_" & IIf(RgCursor.offset(lOffset, Range("MM_SCS").Column - 1).value, "T", "F") & "_" & Format(m_dSS_MM_Asof, "yyyymmdd") & ".xls", bv_dbParticipationFactor:=RgCursor.offset(lOffset, Range("DEALNOTIONAL").Column - 1).value / RgCursor.offset(lOffset, Range("TRANCHENOTIONAL").Column - 1).value, bv_iShift:=RgCursor.offset(lOffset, Range("MM_R").Column - 1).value, bv_bIgnoreFirstCprty:=CBool(RgCursor.offset(lOffset, Range("IGNORECPTY").Column - 1).value), _
bv_dbSpread:=RgCursor.offset(lOffset, Range("DEALSPREADBP").Column - 1).value, _
bv_bUsePVFormula:=RgCursor.offset(lOffset, Range("USEPVFORMULA").Column - 1).value, _
bv_sSTRATEGY:=RgCursor.offset(lOffset, Range("STRATEGY").Column - 1).value, _
bv_sTRADENAME:=IIf(bv_sTradeNamePrefix <> "", bv_sTradeNamePrefix & RgCursor.offset(lOffset, Range("TRADENAME").Column - 1).value, RgCursor.offset(lOffset, Range("TRADENAME").Column - 1).value), _
bv_sTRADESTATUS:=RgCursor.offset(lOffset, Range("TRADESTATUS").Column - 1).value, _
bv_sStructType:=RgCursor.offset(lOffset, Range("STRUCTTYPE").Column - 1).value, bv_sSTRUCTMODEL:=RgCursor.offset(lOffset, Range("STRUCTMODEL").Column - 1).value, _
bv_sTRANCHENAME:=RgCursor.offset(lOffset, Range("TRANCHENAME").Column - 1).value, bv_iTRANCHENUMBER:=RgCursor.offset(lOffset, Range("TRANCHENUMBER").Column - 1).value, bv_lTRANCHENOTIONAL:=RgCursor.offset(lOffset, Range("TRANCHENOTIONAL").Column - 1).value, bv_sTRANCHESUBORDINATION:=RgCursor.offset(lOffset, Range("TRANCHESUBORDINATION").Column - 1).value, bv_sTRANCHECCY:=RgCursor.offset(lOffset, Range("TRANCHECCY").Column - 1).value, bv_dtTRANCHEMATURITY:=RgCursor.offset(lOffset, Range("TRANCHEMATURITY").Column - 1).value, _
bv_sBUYSELL:=RgCursor.offset(lOffset, Range("BUYSELL").Column - 1).value, _
bv_lDEALNOTIONAL:=RgCursor.offset(lOffset, Range("DEALNOTIONAL").Column - 1).value, _
bv_iDEALSPREADBP:=RgCursor.offset(lOffset, Range("DEALSPREADBP").Column - 1).value, _
bv_sUPFRONTFEE:=RgCursor.offset(lOffset, Range("UPFRONTFEE").Column - 1).value, _
bv_sCOUNTERPARTY:=RgCursor.offset(lOffset, Range("COUNTERPARTY").Column - 1).value, _
bv_dtTRADEDATE:=RgCursor.offset(lOffset, Range("TRADEDATE").Column - 1).value, _
bv_dtSETTLMTDATE:=RgCursor.offset(lOffset, Range("SETTLMTDATE").Column - 1).value, _
bv_sREPACKVEHICLE:=RgCursor.offset(lOffset, Range("REPACKVEHICLE").Column - 1).value, _
bv_sCOMMENT:=RgCursor.offset(lOffset, Range("COMMENT").Column - 1).value, _
bv_lADRNOTIONAL:=RgCursor.offset(lOffset, Range("ADRN").Column - 1).value, _
bv_sCOMMENTDEFAULT:=RgCursor.offset(lOffset, Range("COMMENTDEFAULT").Column - 1).value, _
bv_sNEWTRANSID:=RgCursor.offset(lOffset, Range("NEWTRANSID").Column - 1).value, _
bv_sOutPutWSName:=MMTEMPSHEETNAME, _
bv_BookName:=RgCursor.offset(lOffset, Range("BOOKNAME").Column - 1).value, bv_UniqueID:=RgCursor.offset(lOffset, Range("UNIQUEID").Column - 1).value, bv_KMVCorr:=RgCursor.offset(lOffset, Range("KMVCORR").Column - 1).value, bv_MarketCorr:=RgCursor.offset(lOffset, Range("MARKETCORR").Column - 1).value, bv_RunFlag:=RgCursor.offset(lOffset, Range("RUNLISTFLAG").Column - 1).value, bv_DataSource:=RgCursor.offset(lOffset, Range("SOURCETYPE").Column - 1).value)

The second line is about 900 characters long. Nine. Hundred. If you’re starting to want to make it go away, perhaps by clawing out your eyeballs, I don’t blame you.

Finally, for dessert, if you will, or maybe like one of those hot steamy towels presented at the end of some Indian meals, here’s our friend’s take on copying an array:

Public Sub CopyArray(ByVal bv_FromArray As Variant, ByRef br_ToArray() As Variant)
    Dim l As Long
    Dim i As Integer

    For l = LBound(bv_FromArray, 2) To UBound(bv_FromArray, 2)
        ReDim Preserve br_ToArray(1, l)
        For i = LBound(bv_FromArray, 1) To UBound(bv_FromArray, 1)
        Debug.Print bv_FromArray(i, l)

            br_ToArray(i, l) = bv_FromArray(i, l)
        Next
    Next

End Sub

From the blank line following the “Debug.Print” line I surmise that some operation may, in the distant past, have been occurring that made the routine more than a criminal waste of electricity. I hope the use of lower-case “L” is a soothing balm to your eyes, as is the continuous “ReDim Preserve” of the outer loop…
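And for completeness: VBA copies arrays on assignment, so the whole routine boils down to a one-liner, assuming the target is declared as a plain Variant rather than a typed array (the sort of detail you’d want to check before believing me).

Public Sub CopyArray2(ByVal bv_FromArray As Variant, ByRef br_ToArray As Variant)
' array assignment copies in VBA; no loops, no ReDim Preserve required
    br_ToArray = bv_FromArray
End Sub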

I don’t know the final cost of the project that replaced this; I elected not to renew my contract after nine months. I’d guess it didn’t exceed $10 million, not counting the possible losses incurred from trading on inaccurate information in the several years the “system” was live.

I’d call that dangerous.


*we worked at the same place but not at the same time, if that makes any sense

** specs? Don’t be ridiculous.

Whither (Wither?) VBA?

Considering the origins of the various Office components (and a light bulb may be flickering dimly in the minds of those who can remember back that far) we’ve come a long way in the application automation stakes. Can you remember Access Basic? WordBasic? Excel macros pre-VBA? Did early versions of Outlook have macros?

Office application macro capabilities have come a long way but they’ve been pretty much stuck at the last version of the VB6 runtime. That’s about a decade with no significant change. In that time Microsoft have hammered their way up to version 3.5 of .NET, but with only half-arsed (my opinion) gestures made towards improving/extending/renewing the internal automation aspects of Office.

I say “half-arsed”, which, I dunno, might be a little harsh, but the whole VSTO thing just seems like a thin wrapper on COM Interop, which is itself a wrapper to permit communication between shiny new .NET code and skanky old legacy stuff. Why do I need to use VSTO at all? If I need complex, compiled high-performance extensions then I’m probably better off getting one of my C++ literate colleagues to write a “proper” XLL add-in that won’t have to deal with COM at all. If I don’t need high-performance then any scripting language that can talk to COM will do the job. Heck, I can use Ruby (and do) – David Mullett has a whole blog on the topic of Windows automation with Ruby.

Microsoft want to get away from VBA, I think that’s clear. They’re never, never, never going to get the current host of non-technical VBA users to switch to VSTO. Forget it, it’s not going to happen. Hell, I don’t want to have to use VSTO and I’m one who should benefit from raising the cost of entry to macro programming. Do MS want to get away from COM? Maybe. They wanted to get away from DDE too, but it’s still lurking somewhere not too deep inside Windows.

How to make an old programmer slightly less grumpy

But here we have the Dynamic Language Runtime, which sits on top of the .NET CLR and allows fun things such as IronPython, IronRuby and others. Snappy performance, ability to use .NET libraries, interoperability between languages, sounds like fun. According to Wikipedia, the plan is to move VB to run on it. Now there’s a thought: why shouldn’t Excel be rebuilt in sexy modern managed code, with VBA ported to the DLR and the old COM interfaces reduced to a shim to keep backwards compatibility? Then we’d have macro programming where it should be, in the application, with the billions of lines of legacy code still runnable, and I’d be able to hit Alt-F11 and work in Ruby.

Seems like a win-win scenario to me.