Hello Schependomlaan

As many of you in the BIM innovation community noticed, Stijn van Schaijk posted a huge amount of publicly available BIM data on GitHub. Gigabytes of data, including IFC data, BCF, point clouds, schedules, log files, etc., are openly available for R&D and educational purposes.

The message to import the Schependomlaan dataset

The GUI has a new feature to automatically import the whole dataset into your BIMserver for demo and testing purposes.

Schependomlaan after import

In recent weeks we've been using the dataset to perform some optimisations on the usability of BIMserver. We noticed that multiple aspect models didn't perform as we expected, and we updated the API of BIMserver to better facilitate this in the viewers.

Testing the new setup with IfcOpenShell and BIMserver shows remarkable performance. The whole dataset, with all 49 aspect models, fully loads in less than half a minute over a home internet connection.

Testing on localhost shows the complete dataset in around 10 seconds, which indicates that the internet connection is the limiting factor.

We are excited about this and are looking forward even more to the new version of BIM Surfer. BIM Surfer V2 is well on its way, with a much leaner and more stable API, an MIT license and many more features for improved usability.


We are stoked to see more and more open source tools of such high quality complementing each other. The activity on GitHub, the release of the Schependomlaan dataset and the growing use of BIMserver prove that BIM users are still seeking innovation. We are happy to contribute to that 🙂

Metrics on the Elasstic project

The EU-supported research project Elasstic has been using BIMserver to create a new BIM concept in the security domain. During the project, BIMserver was used to store IFC data and trigger remote services (now called 'BIM Bots'). We will go into detail later about the Elasstic BIM concept (which also involved Multi Criteria Analyses for the evaluation of simulations based on BIM data). This blog post is about the metrics of the model and the BIMserver and GUI plugins used.

The BIM in Elasstic was split into 5 different sections that together formed the whole building. They are named 'Ribbon 0' to 'Ribbon 4'. Within the 5 sections, discipline models were created for the disciplines Architecture, Construction and MEP. There was no need for fusion/merging of the models, so those plugins were not used.


In the most recently checked-in revision, the total number of BIM objects created was almost 20 million (19.188.069). To give a small indication of the types of objects modelled:

  • 1151 beams;
  • 1222 columns;
  • 2041 doors;
  • 764 slabs;
  • 1170 spaces;
  • 332 stairs;
  • 2526 windows;

This resulted in:

  • 3.285.133 Cartesian points;
  • 5.019.418 IfcFace objects;
  • The most used IFC object was IfcPolyLoop, with 5.019.433 occurrences.

The IfcClassification object was only used 15 times.
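For readers who want to reproduce this kind of entity tally on their own models, a minimal sketch (not the tooling used for these metrics) that counts entity types in an IFC STEP file by matching lines of the form `#123=IFCPOLYLOOP(...);`:

```python
import re
from collections import Counter

def count_ifc_entities(step_text):
    """Tally entity type occurrences in IFC STEP data.

    Matches instance lines like '#123=IFCPOLYLOOP(...);' and
    counts the entity keyword after the '=' sign.
    """
    pattern = re.compile(r"^#\d+\s*=\s*(IFC[A-Z0-9]+)", re.MULTILINE)
    return Counter(pattern.findall(step_text))

# Tiny inline sample instead of a real multi-gigabyte file:
sample = """#1=IFCCARTESIANPOINT((0.,0.,0.));
#2=IFCCARTESIANPOINT((1.,0.,0.));
#3=IFCPOLYLOOP((#1,#2));
"""
counts = count_ifc_entities(sample)
print(counts.most_common())
# → [('IFCCARTESIANPOINT', 2), ('IFCPOLYLOOP', 1)]
```

For the multi-gigabyte files discussed later in this post, you would stream the file line by line rather than reading it into memory, but the counting idea is the same.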



The geometry of this latest revision is quite detailed. The metrics about the geometric triangles:


These numbers are after the boolean operations performed by IfcOpenShell.


In total, 35 revisions were checked in by 7 different users.

The BIMserver usage process was evaluated by applying process mining technology to the event log of BIMserver. This resulted in the following diagram (credits to Stijn van Schaijk):


The project started with BIMserver 1.2 but was migrated to version 1.3 during its course. A GUI plugin was used as the front end, and the IfcOpenShell plugin was used to generate geometry.

The setup runs on a dedicated server with 56 GB of RAM, 4 CPUs and a rather slow disk.

All objects could be shown. On some operations (such as selecting), the performance was not user friendly, but the most common usage has acceptable performance.

We received multiple suggestions for optimizing BIMserver, which will be evaluated during 1.4 development.

An impression of the latest revisions of the models:


Some metrics about ‘large’ models

We are getting a lot of questions about the ability of BIMserver to handle large models. Most of the time our answer is that you have to allocate more heap memory. We’ve never seen a model that cannot be handled by BIMserver because of its size.
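Allocating more heap is a standard JVM setting. As a minimal sketch for a standalone launch (the JAR name and the sizes are illustrative, not a prescribed configuration):

```shell
# Start BIMserver's JVM with a larger heap: 2 GB initial, 8 GB maximum.
# The JAR file name is illustrative; use the one from your own install.
java -Xms2g -Xmx8g -jar bimserver.jar
```

The right `-Xmx` value depends on the size of the models you check in and on how much RAM the machine has to spare.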

Recently some users asked us to do some quick research into how BIMserver handles large models. We received several different IFC models varying in size between 500 MB and 3 GB, some of them so complex that they don't open in any known IFC viewer.
This gave us the opportunity to measure the performance of BIMserver and get some metrics. We are happy to share these with you in this blog.

We used a 122 GB RAM cloud server from Amazon with 16 cores for this test.
We decided to run the test as a sequence, meaning it is not a 'check in one model and stop' test, but a 'check in all models one after another and pray' test.

We are using several different IFC files for this:

  1. Model 1 is a 1.5 GB IFC STEP file;
  2. Model 2 is a 474 MB IFC STEP file;
  3. Model 3 is a 1.9 GB IFC STEP file;
  4. Model 4 is a 3.85 GB IFC STEP file;
  5. Model 5 is a 991 MB IFC STEP file;
  6. Model 6 is a 685 MB IFC STEP file.

We started the test on November 7th at 01:28:02; it ended at 05:42:09. The duration of the test was of course influenced by the upload speed of the internet connection. What is interesting is the database and memory usage of the server.

The log:

07-11-2014 01:28:02
Database size: 350.91 KB (359330)
Used: 2.12 GB, Free: 1.29 GB, Max: 200.00 GB, Total: 3.42 GB
Done checking in Model 1
Creating project model 2
07-11-2014 02:10:50
Database size: 15.02 GB (16131952680)
Used: 53.13 GB, Free: 23.70 GB, Max: 200.00 GB, Total: 76.83 GB
Done checking in model 2
Creating project model 3
07-11-2014 02:23:54
Database size: 7.82 GB (8391386817)
Used: 38.05 GB, Free: 39.22 GB, Max: 200.00 GB, Total: 77.27 GB
Done checking in model 3
Creating project model 4
07-11-2014 03:20:47
Database size: 22.25 GB (23893972378)
Used: 68.99 GB, Free: 52.90 GB, Max: 200.00 GB, Total: 121.90 GB
Done checking in model 4
Creating project model 5
07-11-2014 05:16:25
Database size: 42.89 GB (46053250623)
Used: 121.55 GB, Free: 29.69 GB, Max: 200.00 GB, Total: 151.23 GB
Done checking in model 5
Creating project model 6
07-11-2014 05:42:09
Database size: 32.35 GB (34733942862)
Used: 67.11 GB, Free: 45.70 GB, Max: 200.00 GB, Total: 112.81 GB
Done checking in model 6

About 2 hours later, the database was 30 GB, CPU usage was back to normal and 70 GB of heap memory was used, of which probably 56 GB was cached database data. We've explained many times why BIMserver's storage footprint is larger than the original IFC file: we believe this is needed to profit from the benefits of using separate objects in a database instead of files. Of course, all models were intact after download from the server.
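Log entries of the shape shown above can be turned into structured numbers for plotting or comparison. A small sketch that parses such entries (the field names are our own, not BIMserver's):

```python
import re

# One entry in the same shape as the log above.
LOG = """07-11-2014 02:10:50
Database size: 15.02 GB (16131952680)
Used: 53.13 GB, Free: 23.70 GB, Max: 200.00 GB, Total: 76.83 GB
Done checking in model 2"""

# Capture the timestamp, exact database size in bytes, and heap figures.
entry_re = re.compile(
    r"(?P<ts>\d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2})\n"
    r"Database size: .*? \((?P<db_bytes>\d+)\)\n"
    r"Used: (?P<used>[\d.]+) GB, Free: (?P<free>[\d.]+) GB, "
    r"Max: (?P<max>[\d.]+) GB, Total: (?P<total>[\d.]+) GB"
)

entries = [m.groupdict() for m in entry_re.finditer(LOG)]
for e in entries:
    print(e["ts"], int(e["db_bytes"]), float(e["used"]))
```

With all six entries parsed this way, you can chart database growth and heap usage per check-in instead of eyeballing the raw log.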

What this test shows is that BIMserver is perfectly capable of handling large models, and even tolerates the check-in of a model while another check-in is still being processed by the database. This proves the stability of BIMserver as a strong base for every kind of IFC model.

What surprised us was the caching and memory usage of the database. We decided to spend some time on that and expect better performance metrics in the next release.

Slow but stable

Many of you probably know the story of the Tortoise and the Hare. After the 1.0 final release of the software, this story might be appropriate. The 1.0 release was months behind our originally planned release date (summer 2010), but it has proven to be a winner in stability. Since the release date, the download counter has passed 300 downloads. First feedback from users is unanimous: slow but stable.
After a release, the development team always steps back and takes a good look at the core again. Building features is easy, but building them on top of a reliable foundation is the key to stability. After the 1.0 final release we did the same and created a list of 'critical' things for the new core. Until now the performance of the software never became an issue, but we cannot ignore it anymore. We even found a sponsor to finance this (the name will be public soon…). That is why 'lazy loading' of database objects is one of the first new implementations for the next release. A lot of work has already been done on that in the source code trunk. Thanks to intensive usage statistics from fanatic users, we also found that alphabetic sorting of project names really (yes, really) slowed things down. This kind of user feedback is very valuable.
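Lazy loading here means deferring the database read for an object until its fields are actually accessed, so listing thousands of objects doesn't pull all of their data from disk. A language-agnostic sketch of the idea (BIMserver itself is Java; this is an illustration of the pattern, not its actual implementation):

```python
class LazyObject:
    """Placeholder that loads its data from the store on first access."""

    def __init__(self, oid, loader):
        self._oid = oid        # database object id
        self._loader = loader  # callable: oid -> dict of field values
        self._data = None      # not loaded yet

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails,
        # i.e. for the object's (not yet loaded) fields.
        if self._data is None:
            self._data = self._loader(self._oid)  # one read, then cached
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

# Fake database read that records when it is actually invoked.
loads = []
def fake_db_read(oid):
    loads.append(oid)
    return {"name": f"IfcWall #{oid}"}

wall = LazyObject(42, fake_db_read)
assert loads == []   # constructing the placeholder reads nothing
print(wall.name)     # first field access triggers the single read
assert loads == [42]
```

The win is that operations that only need object identities (such as listing a project's contents) never pay for the field data, which is exactly where deferred reads save time on large models.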
The biggest dilemma right now is the choice between releasing a 'performance update' as 1.0.1, or continuing development until the 1.1 release is stable. Let us know what you think…

Update: Lots of improvements have been made to speed since this post from 2011!