Getting CORS to work on C# WebAPI

There are many guides for this, and it should be a simple thing, but there is one possible complication: duplicate header entries.

So I’ll make this quick. Refer to for more information.

“Where can I get System.Web.Http.Cors?”

It comes with the Microsoft.AspNet.WebApi.Cors NuGet package (Install-Package Microsoft.AspNet.WebApi.Cors), which targets Web API 2.
“How do I enable CORS?”

Add to WebApiConfig.Register():
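The original snippet was lost here, so this is a sketch of the standard Web API 2 setup. It assumes the usual `Register(HttpConfiguration config)` signature, and the origin string is a placeholder you'd replace with your own:

```csharp
using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Allow the listed origin to send any header and use any HTTP method.
        // "*" as the origin works too, but an explicit origin is safer.
        var cors = new EnableCorsAttribute("http://example.com", "*", "*");
        config.EnableCors(cors);

        // ...your existing route registrations...
        config.MapHttpAttributeRoutes();
    }
}
```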


But now, depending on your other settings, you might get a duplicate Access-Control-Allow-Origin header. This is how you remove it:

Add to WebApiApplication in Global.asax.cs:
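Again the original snippet is missing; one common fix looks like the sketch below. It assumes the duplicate header comes from two sources stacking (e.g. `<customHeaders>` in web.config plus the CORS attribute) and that the app runs in the IIS integrated pipeline, where response headers can be edited in `PreSendRequestHeaders`:

```csharp
protected void Application_PreSendRequestHeaders(object sender, EventArgs e)
{
    HttpResponse response = HttpContext.Current.Response;

    // Browsers reject a response carrying Access-Control-Allow-Origin twice,
    // which happens when both IIS and Web API add it. Keep a single copy.
    string[] origins = response.Headers.GetValues("Access-Control-Allow-Origin");
    if (origins != null && origins.Length > 1)
    {
        response.Headers.Remove("Access-Control-Allow-Origin");
        response.Headers.Set("Access-Control-Allow-Origin", origins[0]);
    }
}
```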


That’s it. Should work now.



I’ve become inspired by the success of the indie game Minecraft, and naturally I had to try coding a game myself. Given that I have very little spare time between work, family and other duties, it could take a while… but life will find a way. The name of the game is Kojvenepane (don’t complain, it was available!) and here’s the development blog:



Well… thought I’d post a status update about my freeware software projects. It’s a curious thing, I never seem to have time to finish them. Development of jeSokobanSolver 3 has been halted due to a memory bug (corruption of unknown cause; I now think I know why, but haven’t gotten around to fixing it yet). Nothing is being done on jeTMS, and I’ve been doing some work on the new jeDebug but really need to put in a full week of coding to get it up and running. And today I find myself in the rare situation that I have a few spare hours and a burning desire to do some coding, so I will…. start a new project! 🙂 Going to get jeMachineManager up and running FAST.

jeMachineManager (previously jeServerManager), or jeMM for short, is Systems Management software that I spec’ed years ago… It keeps inventory of hardware and software, monitors uptime and health, and notifies the admin of any problems on any of the machines. There is plenty of other SM software, like System Center Configuration Manager from Microsoft or the open-source Nagios, but it is all bloated and complicated to set up and manage (no offense), requiring a bunch of intrusive prerequisites and special user accounts, and in some cases hogging too many resources.

jeMachineManager is a minimal light-weight solution, requiring only the installation of a small client (service) on the client machines. It communicates via HTTP with a cgi (or webservice) on the server. The cgi talks to a MS SQL database. There’s also an administration web interface and an admin client that handles notifications etc.

The client –> server HTTP communication means there’s very little to configure for network access, and you’ll be able to add machines that are outside of the intranet, or in another intranet.

In its most basic mode the client just polls the server with “I’m alive!”-messages, reporting ID and IP. That’s how I’ll be able to get it up and running quickly. All further functionality is provided with an addon-system yet to be designed.
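In case a sketch helps: the basic “I’m alive!” poll described above could look something like this. All the names here are made up for illustration (the real jeMM protocol, endpoint and message format were never published):

```csharp
using System;
using System.Net;
using System.Threading;

class HeartbeatClient
{
    static void Main()
    {
        // Hypothetical endpoint; jeMM talks HTTP to a cgi/webservice on the server.
        const string server = "http://jemm.example.local/cgi-bin/jemm.cgi";
        const string machineId = "MACHINE-001";

        using (var web = new WebClient())
        {
            while (true)
            {
                // Report "I'm alive", with ID and IP, as described above.
                string ip = Dns.GetHostAddresses(Dns.GetHostName())[0].ToString();
                web.DownloadString(server + "?msg=alive&id=" + machineId + "&ip=" + ip);

                // Poll interval is a guess; the real one would be configurable.
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }
    }
}
```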

The admin can group machines together in any way he wants, and he has access to the web interface that provides an overview of all the machines and their status, including a list of warnings… machines that are down, services that don’t reply, excessive cpu loads, disks that are near full, software that needs updating, security issues, failing fans etc etc.

The client itself does not need the server though; it can operate in a stand-alone mode where it only notifies, and advises, the user of any computer health concerns.

Starting development… hmm.. NOW!

No I didn’t, the spare time disappeared. Ok, maybe Thursday.

20200526: This project is permanently canceled.

FILEZ: The chassis has arrived

The chassis has arrived. It’s a Lian-Li PC-343B Modular Cube Case, a double-wide chassis with 18 x 5.25″ bays, how about that? It’s my first Lian-Li and I’m very much impressed. The outside is pleasing to my eye and the inside seems well designed. Remains to be seen how it works out when the rest of the components arrive. 🙂

On to some photos… (you can click the pictures to view them in full size):

On the back you have a number of “slots”/panels for fans, drive bays and/or PSUs. It is shipped with a panel for one PSU, but you can buy additional panels for installing a second or redundant PSU.

You can have three 120mm fans, or replace the two on the left side with drive bays (including an 80mm fan) holding three 3.5″ disks. If you don’t want the fans or anything else there, you can buy cover panels to cover up the holes.

Can’t wait to get started building this thing!

Planning for my ZFS fileserver

I’m looking to consolidate my storage. As seen in my previous post machine setup and disk/net benchmarking 2009-04-16, my disks are spread out a lot between my machines. There are several issues that I would like to address and improve upon: speed, data security, backup, maintenance, locating data and scaling.

I’ve decided to build a dedicated fileserver. Its name shall be… hmm… FILEZ. Afterwards I’ll move my stuff around and somehow get rid of PLUMBUM or GATES.

This is what my intranet looks like at the moment:


(Yes, it’s an actual old-style drawing, on real paper!)

I’ll be running OpenSolaris and ZFS. ZFS because it’s the most secure filesystem in existence, and also the fastest, most flexible and most scalable filesystem in existence. OpenSolaris because it’s the “mother” operating system of ZFS.

Here’s my chassis:


Can you dig it? CAN YOU DIG IT? With 6 internal disks, 6 of those 2×5.25–>3×3.5 drive bay converters and two 3×5.25–>5×3.5 drive bay converters you could fit 34 (!!) disks in there without having any loose disks dangling from their molex cables inside the chassis. But I won’t be doing that, methinks overheating drives would be a problem. Anyway, this should take care of my storage needs for a looooooong time. The problem isn’t the space for disks but the SATA connections. ZFS isn’t a clustered filesystem, you need all the disks connected to the same machine. A standard server mobo has six SATA ports; with an expansion card like SuperMicro’s AOC-SAT2-MV8 (for PCI-X = 1.5 GB/s) or the AOC-SASLP-MV8 (for PCI-E = 500 MB/s) you get 8 more SATA ports… if you have those slots on the mobo. If you’ve just got PCI slots there are cards with 4 SATA ports, but PCI is 133 MB/s (split between four disks) so the performance will suffer.

I’m looking at mobos from Intel and SuperMicro; it must have one, preferably two, PCI-X slots, ECC memory, and as many SATA ports as possible. The X7SBA is one option, the S3210SHLX another (but where’s its page at?). I’ll get any cheap CPU that fits the mobo and 2 x 2GB of dual-channel RAM. I will need a really good PSU for the mobo with lots of SATA connectors, not sure which one yet; possibly one more PSU in there to manage the power for all the disks. Worst case scenario: have another PSU outside the case, ugh…

The disks.. now there’s where the real fun starts! 🙂

This is my little TODO-list, including a lot of moving files around… No need for you to read this, it’s just to assist me…

  • Must move data to free up the three smaller disks in CONAN and the 160GB in GATES
  • I will re-install TORPED2 on the two (striped) 160GB disks
  • Move database (MySQL) from PLUMBUM to CONAN (or FILEZ (AMP)?)
  • Shutdown PLUMBUM for good
  • Swap mobo + CPU (but keep memory!) of PLUMBUM and GATES.
  • Will try to setup GATES to boot off FILEZ, over ethernet.
  • If that doesn’t work I will move GATES to CONAN’s 200GB disk.
  • I will make a ZFS pool using 7 x 250GB disks in a raidz config (ZFS’s equivalent of RAID-5). The disks will come from the old TORPED2 (4 x 250GB), CONAN (1 x 250GB), GATES (2 x 320GB).
  • Will now have 1.5TB of usable space in FILEZ
  • Move all data from CONAN to FILEZ
  • Try to make CONAN boot off FILEZ. If that doesn’t work to my satisfaction then setup a mirror config using the internal RAID (?) and two disks (which ones?)
  • Make a new raidz (RAID-5) using 7 x 500GB disks. The four from old CONAN, the one from PLUMBUM and the two from PLUMBUM’s backup. Add the raidz to the pool to give me a total of 4.5 TB of contiguous disk space. Yes, indeed.
  • We have a risky situation now. No backup. TORPED2 and GATES do not have redundant storage but that’s fine as they’re backing up daily to FILEZ. A disk breaks, no problem at all. But no backup of NAS or FILEZ? Scary. Even though I know ZFS is safe as heck, it’s still scary…
  • Move all data from NAS to FILEZ.
  • Make a new raidz using the four 1TB disks from NAS. That’s 3 TB of usable space. Add to the ZFS pool to make a total of 7.5 TB of contiguous usable disk space.
  • Make something of old PLUMBUM (which now has the mobo from GATES)… If I got GATES and CONAN to boot off FILEZ I should have some spare disks to put in it.
  • Addendum: One of the 250GB disks in TORPED2’s current RAID-5 keeps failing. Intel’s controller seems unable to identify which one, hopefully ZFS will identify it so I can remove it, making the first vdev 6 disks, not 7.
  • Addendum 2: Disk 3 in my NAS is failing the daily SMART test, replace it with new disk and use the old one for backup.
  • Get one more disk for the (JBOD) external backup enclosure. Attach to GATES and config backup of vital data from FILEZ.
  • Setup off-site backup (a deal with a friend is in progress)
  • Sell the old NAS 🙂

Something like that. Should work. Phew.
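For my own reference, the pool-building steps in the list above translate to roughly these OpenSolaris commands. The device names are placeholders (the real c#t#d# ids come from the `format` utility), and the pool name is just an example:

```
# Create the pool with the first raidz vdev (7 x 250GB -> ~1.5TB usable)
zpool create filez raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0

# Later: grow the pool by adding further raidz vdevs (the 500GB and 1TB batches)
zpool add filez raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

# Check status and redundancy at any point
zpool status filez
```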

That’s 18 disks plus 2 mirrored ones for the root (on a separate pool). 20 disks. Will need the six internal SATA ports and two 8-port PCI-X cards. Or maybe go with 18 disks and one PCI-X card and use my old Adaptec 2410SA 4-port PCI card for now?

Possibly I’ll run AMP (LAMP without the L) on FILEZ. That would take a lot of stress off CONAN. I will still need to run Apache on CONAN though, to handle all my CGIs. It will take some time to transfer to FILEZ as I will have to start handling directory/file permissions… I imagine I’ll be chmod’ing for days…

I am running tests of OpenSolaris/ZFS right now, using GATES and two old 120GB-disks. I am having problems sharing folders using SMB/CIFS, it’s very annoying. Hopefully it will work out in the end, will post details here if/when it does. (Edit 090503: I just solved this problem!)

I need to check for bottlenecks:
– cables,  they’re all CAT 5e or better?
– routers/switches etc: all up to speed?
– disks: all disks have similar speeds?
– memory, cpu, mb: should be no problem there
– using more than one ethernet port, would it make a difference?

What a beast of a blog post. Took half the day including research. Posting it now, possibly making changes to it later.

Comments are disabled as usual. Will create a forum thread for discussions regarding this project.

Website design: Global companies vs visitor locality

Why do I, as a visitor to a company website, have to tell the website where I am located? Why do I have to decide which country’s website I need to visit? It’s just obnoxious and poor design.

Case in point: I went to Canon’s website to get drivers for my mother’s scanner. They had a Drivers link on the first page, which is commendable.

But then they display a MAP!!

They show five options on this map: Americas, Europe/Africa/Middle East (yes, because, it’s the same, right?), Japan, Asia, Oceania

I don’t know what they want with this map. For all I know they want me to tell them where they would find the best coffee. But I’ll play. I select “Americas” as I want the English version of the drivers.

Then they display MORE options; this time I have to select which part of the “Americas”: Canada, Latin America, Mexicana, Argentina, Brazil, Chile or the USA. When I select USA I am transported to a new website. And there they have a new menu with a new Drivers link. I have already pressed Drivers, for crying out loud!

I wouldn’t bother if it was just Canon, but I’ve seen so much of that kind of design lately.

Choosing a location, whether it’s the visitor’s location or the website location, should be OPTIONAL as far as possible. I don’t mind having the choice of language on every page, in fact I think it’s excellent. But it should not mean being transported to another website. It should just change the language on the website you’re on.


Large Hadron Collider, CERN, safety

I won’t go into what the LHC is here; you can read more about it at .

My post is about safety and risks. The researchers at CERN assure us that it is safe. But can they really do that?

Nuclear power plants are surrounded by rigorous safety arrangements. Mistakes happen anyway. The difference with the LHC is that a mistake doesn’t just put the surrounding area, a city or a country at risk… but perhaps the whole world.

CERN doesn’t exactly give the impression of taking the risks seriously. At a seminar at CERN, a question about black holes got this condescending answer: “Now talk about fussing about nothing: first of all, a hole, black to boot, and microscopic on top of that! If tiny, weeny little holes are going to get a big grown-up man like you all scared, holy banana, what would a big white bump do to you?”.

They also took the opportunity to draw a little black hole so everyone could see how small it is, the same demagogic trick that superstring researcher Ulf Danielsson parroted in this article in Ny Teknik: where he also states “Such a mini hole, however, only lives a few hundred millionths of a billionth of a billionth of a second before it disappears in a cloud of Hawking radiation.” as if it were established fact.

Kerstin Jon-And, professor of particle physics and chair of the board of Atlas, says in Metro ( ) that “The Earth will NOT disappear into a black hole”, which must feel reassuring to the reader, and she too explains: “This type of collision happens all the time when cosmic rays hit the Earth’s atmosphere. If we are terribly lucky we might be able to create what we call microscopic black holes, but they would still evaporate immediately.”

What nobody cares to mention is that the THEORY of Hawking radiation is heavily criticized, and that Stephen Hawking himself no longer believes it exists. So why is it used as an argument that black holes are harmless?? Well, the rabble has to be calmed down somehow, right? Otherwise the researchers don’t get to play with their nice shiny collider….

Particle physicist David Milstead uses the same argument in Aftonbladet’s article at

And furthermore, regarding this happening naturally when cosmic rays hit the Earth’s atmosphere… If it really is the same thing, you might ask why a 50 billion machine is being built to simulate something that happens continuously in our atmosphere? (Sure, it’s hard to measure the effects up in the atmosphere, but that’s not the only reason.)

It has 1) never been observed or proven that cosmic rays create black holes, and 2) if it does happen, the holes are shot through the Earth and onward out into space at close to the speed of light. In the LHC case the black hole is born at rest, so it stays in place, which is an important difference. So unless the CERN-cheering professors/physicists have better arguments to offer, I’m getting a little scared….

Oh well, it sells just fine to tired journalists who don’t have the time or motivation to dig into the issues.

Critical research reports (e.g. Rainer Plaga’s report), which form the basis of lawsuits against CERN to stop the project, mention several measures that could have been taken to improve safety. These have not been addressed. XXX fix the links

Is CERN really the body that should assess the safety? Not an independent and global organization such as the UN??

Black holes are not the only thing the LHC could potentially create; it could also create so-called “strangelets”, a kind of matter believed to exist in neutron stars. In Metro Teknik you can read: “Even though no one knows exactly what the LHC will answer, they are sure it will not mean the end of the world. A safety panel including, among others, a Nobel laureate backs that claim.”

Wow, a Nobel laureate! Well then… nothing beats an argument from authority…

But… wait a minute… Physicist and Nobel laureate Frank Wilczek wrote around 2000 in Scientific American that the collider could produce strangelets. In that case “one might be concerned about an ‘ice-9’-type transition”, wherein all surrounding matter could be converted into strangelets and the world as we know it would vanish.

Oops. Nobel laureate against Nobel laureate, who to believe now?

Another kind of matter that could be created is bosenovas… On August 28 CERN was sued by a group of physicists, researchers and students, mainly from Germany and Austria (slightly too late to stop the LHC launch, I guess), as they consider the LHC to pose a serious risk to the safety of the EU. The risk of bosenovas has not been investigated: “Whether possible or not is unknown, no experiments having been done by CERN to rule out the possibility, nor any theoretical model studies.”


But there are other things that could be created too, including completely unknown physical substances and phenomena.

The researchers’ assurances of safety are relatively empty words, since they simply don’t know what will happen: nothing like this has ever been done before. They are aware of this themselves and have admitted it. They can only speculate. Speculate… and at the same time guarantee, oddly enough.

CERN professor Dugan O’Neil says it will never be possible to completely rule out that “some very strange things” could happen.

CERN spokesmodel Brian Cox says “the LHC is certainly, by far, the biggest jump into the unknown.”

CERN physicist Alvaro De Rújula says “science is what we do when we don’t know what we’re doing.”

I think this sits poorly with their assurances of how safe it is. It is often said that extraordinary claims require extraordinary evidence. In this case I would add that extraordinary risks require extraordinary safety measures.

Where are the safety measures?

So… I want to end by saying that I don’t actually believe any of this will happen. Most likely the LHC will run like clockwork for years without anything happening. But the fact that I don’t feel the risks have been sufficiently investigated or taken seriously enough, that ongoing lawsuits have not concluded, and that critical safety reports are not being taken seriously… these are sufficient arguments for holding off on the LHC, at least when the potential consequences of misjudgments, mistakes and direct effects of the LHC collisions are so potentially catastrophic for all life on Earth…

O’Neil estimates the risk of any of this happening as “extremely unlikely”. Ok, let’s assume you’re right, that the risk really is extremely unlikely. Sorry, but “extremely unlikely” is still not good enough… not in this context.

Any comments on this post can be written in the forum, in the thread:

Here is a fresh (2008-09-12) opinion piece by Ulf Danielsson where he continues to praise the LHC and wave away the critical questions as “unfounded”:


Now here’s something interesting:

“C++0x is the planned new standard for the C++ programming language.”

It is planned for release in 2009, in which case the final name will be C++09.

This sounds good:

“Prefer introduction of new features through the standard library, rather than extending the core language;”

“Prefer changes that can evolve the programming technique;”


“Attention to beginners is important, because they will always comprise the majority of computer programmers, and because many beginners do not intend to extend their knowledge of C++, limiting themselves to operate in the fields in which they are specialized.”

However, while reading through the draft, I can’t say that I notice this aforementioned attention… in some cases, quite the opposite. I see many neat tricks that the experienced programmer will find good use for, but that the newbie will struggle to understand how and when to use. Dealing with the basics of C++ is hard enough.

Well, I will return with a thorough review once I get around to it…