Nassim Portfolio


Let's talk about concurrency

Hello, guys, today we'll tackle one of the most complex problems in computer programming: concurrency.

To be honest with you, we won't really tackle the problem ourselves, but rather see how some bright people solved it, and look at a few technologies based on these concepts.

Nowadays it's a fact that computing scales horizontally: instead of raising CPU frequencies, we increase the number of CPU cores. This is because raising the frequency is much more costly, while adding cores is much more flexible: it is far easier to add a processor than to change its frequency and manage the resulting heat and energy consumption.

We'll talk about patterns that apply equally to thread communication and to data management systems, and briefly see how each one works.


Mutual exclusion

Most computer scientists learn this pattern first because, historically, it was the first to be implemented. In Linux programming the primitives are called semaphores, and they provide mutual exclusion. The idea is that when you want to access a resource being modified by someone else, you wait until the modification is done (and you're notified) before accessing it. This ensures your data modifications are deterministic: no matter how many times you run your program, you'll get the same expected behavior.
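
As an illustration, here is a minimal promise-based mutex sketch in JavaScript (all names are made up for the example): concurrent tasks serialize their read-modify-write sections by waiting for the lock.

```javascript
// A minimal promise-based mutex sketch (all names are made up for the
// example). acquire() resolves with a release() callback once every
// earlier holder has released the lock.
class Mutex {
  constructor() {
    this._queue = Promise.resolve(); // resolves when the lock is free
  }
  acquire() {
    let release;
    const next = new Promise(resolve => { release = resolve; });
    const ready = this._queue;             // wait for the previous holder
    this._queue = ready.then(() => next);  // chain ourselves after it
    return ready.then(() => release);
  }
}

// Three concurrent tasks increment a shared counter; the mutex makes each
// read-modify-write section atomic despite the interleaving timeouts.
async function run() {
  const lock = new Mutex();
  const shared = { counter: 0 };

  async function increment() {
    const release = await lock.acquire();
    try {
      const value = shared.counter;             // read
      await new Promise(r => setTimeout(r, 5)); // simulate slow work
      shared.counter = value + 1;               // write back safely
    } finally {
      release(); // always hand the lock to the next waiter
    }
  }

  await Promise.all([increment(), increment(), increment()]);
  return shared.counter; // 3 with the lock; lost updates would give 1
}

run().then(n => console.log('counter =', n));
```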

For thread communication, a pattern called CSP (Communicating Sequential Processes) is also blocking: when you send a message to another thread, you wait until the message has been read. Over the years the model was improved to allow asynchronous (buffered) channel communication, bringing it one step closer to the actor pattern. This is the main communication pattern of the Go programming language.
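
To make the idea concrete, here is a toy unbuffered channel sketch in JavaScript (illustrative, not a real CSP library): a send stays pending until a receiver takes the value.

```javascript
// A toy unbuffered CSP-style channel (illustrative, not a real library):
// send() stays pending until a receiver takes the value, mirroring the
// blocking rendezvous of Go's unbuffered channels.
class Channel {
  constructor() {
    this._senders = [];   // queued { value, ack } waiting for a receiver
    this._receivers = []; // queued resolve callbacks waiting for a value
  }
  send(value) {
    return new Promise(ack => {
      const receiver = this._receivers.shift();
      if (receiver) {
        receiver(value); // hand off directly to a waiting receiver
        ack();
      } else {
        this._senders.push({ value, ack });
      }
    });
  }
  receive() {
    return new Promise(resolve => {
      const sender = this._senders.shift();
      if (sender) {
        sender.ack();          // unblock the waiting sender
        resolve(sender.value);
      } else {
        this._receivers.push(resolve);
      }
    });
  }
}

// One task produces, the other consumes; both meet on the channel.
async function demo() {
  const ch = new Channel();
  const producer = (async () => { await ch.send('ping'); })();
  const message = await ch.receive();
  await producer; // the sender is unblocked once the message is read
  return message;
}

demo().then(m => console.log('received', m)); // received ping
```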

For example, when you edit a row in an InnoDB MySQL table, the row is locked until the modification is committed to the database. All ACID databases implement this pattern.


Transactions

This pattern is based on the same principle but allows more flexibility. The idea of a transaction is that you read the data whenever you want, edit it as you wish in a temporary log, and then commit your modifications, hoping there is no conflict. If there is one, you simply retry: take the freshest data and replay your modifications on it, hoping it works this time. If not, you roll back.
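
The retry loop can be sketched like this in JavaScript (a simplified, hypothetical version-based store, not a real STM):

```javascript
// A sketch of the optimistic, transactional idea (hypothetical version-based
// store, not a real STM): read a snapshot, edit a local copy, then commit
// only if nobody else committed in between; on conflict, retry on fresh data.
const store = { value: 0, version: 0 };

function commit(expectedVersion, newValue) {
  if (store.version !== expectedVersion) return false; // conflict detected
  store.value = newValue;
  store.version += 1;
  return true;
}

function atomically(update, maxRetries = 10) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const snapshot = { ...store };           // take the freshest data
    const newValue = update(snapshot.value); // edit a temporary copy
    if (commit(snapshot.version, newValue)) return newValue;
    // conflict: loop again, replaying the update on fresher data
  }
  throw new Error('too many conflicts, rolling back'); // give up: rollback
}

atomically(v => v + 1);
atomically(v => v * 10);
console.log(store.value); // 10
```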

In databases this is used a lot to optimize large batches of operations on a dataset; in MySQL it is exposed through the TRANSACTION statements. Most ACID databases implement this pattern.

In programming, we call it STM (Software Transactional Memory); it is exactly the same idea, with some improved variants. One benefit of this technique is that transactions compose: you can build transactions out of transactions, allowing heavier operation batching. It is a nice pattern, implemented notably in the Haskell programming language. See the Haskell wiki for more information.


Actors

When you start working on real-world concurrent applications, no matter which of the preceding patterns you use, you quickly find yourself overwhelmed by all the locking / transaction code needed to manage them. That's why the actor pattern appeared. The idea is to see each thread as an actor that receives a message, acts on it, and sends another message to someone else if necessary. Because the same data is never accessed at the same time by several actors, you do not even have to manage concurrent accesses.
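
A minimal sketch of the idea in JavaScript (illustrative names, not a real actor framework): each actor processes its mailbox one message at a time, so its private state never needs a lock.

```javascript
// A minimal actor sketch (illustrative names, not a real actor framework):
// each actor owns a mailbox and processes one message at a time, so its
// private state never needs locking.
class Actor {
  constructor(handler) {
    this._handler = handler;
    this._mailbox = [];
    this._running = false;
  }
  send(message) {
    this._mailbox.push(message);
    if (!this._running) this._drain();
  }
  async _drain() {
    this._running = true;
    while (this._mailbox.length > 0) {
      await this._handler(this._mailbox.shift()); // one message at a time
    }
    this._running = false;
  }
}

// The counter's state lives in a closure, reachable only through messages.
function makeCounter() {
  let count = 0;
  return new Actor(msg => {
    if (msg.type === 'add') count += msg.amount;
    else if (msg.type === 'report') msg.replyTo.send({ type: 'count', value: count });
  });
}

const printer = new Actor(msg => console.log('count is', msg.value));
const counter = makeCounter();
counter.send({ type: 'add', amount: 2 });
counter.send({ type: 'add', amount: 3 });
counter.send({ type: 'report', replyTo: printer }); // count is 5
```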

To be sure we never share memory, we need to introduce a new concept: immutability. It ensures that any modification of a given memory space is stored in another memory space. One problem this introduces is that a naive implementation will just blow up your program's memory; that's why immutable languages like Erlang or Haskell do some memory optimizations under the hood.
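
In JavaScript, immutability can be approximated by convention: instead of mutating an object, you build a modified copy in a new memory space.

```javascript
// Immutability by convention in JavaScript: a "modification" builds a new
// object in a new memory space instead of mutating the original, so other
// readers of `before` are never surprised.
const before = Object.freeze({ count: 1, owner: 'actor-1' });
const after = { ...before, count: before.count + 1 }; // copy, then change

console.log(before.count); // 1: the original is untouched
console.log(after.count);  // 2
```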

Erlang provides a native implementation of this pattern, with some fine tools to manage spawned processes and message passing.

As you noticed, this time we talked about the thread communication part first; that's because I do not know of any database implementing this model for now. The closest data management system I can think of would be the blockchain, where a message is sent from one actor to another until the transaction is verified and completed, with a lot more occurring between these simplified steps.


Conclusion

These are some of the best-known patterns; each has its drawbacks and benefits. I presented them in my personal order of preference, but I'd recommend testing which one best fits your needs. For example, the actor pattern may not fit lightly loaded applications, because the simplified programming is counterweighted by message queue management and process scheduling.


Bonus: the disruptor pattern

To finish, I wanted to talk briefly about a pretty new pattern that, like some of you, I'm just discovering: the disruptor pattern. The idea is that you have a ring buffer of message slots, one per producer, accessible by as many consumers as you want. A consumer can also be a producer. The concept is interesting and seems to perform well for some use cases. See Disruptor-1.0.pdf for more details.
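
As a rough sketch of the idea (a toy single-producer ring, far simpler than the real disruptor): the producer publishes into ring slots by sequence number, and each consumer keeps its own read cursor over the same slots.

```javascript
// A toy single-producer ring buffer in the spirit of the disruptor
// (illustrative only, no memory barriers): the producer publishes into
// slots by sequence number, and each consumer keeps a private read cursor.
class RingBuffer {
  constructor(size) {
    this.size = size;          // a power of two in the real disruptor
    this.slots = new Array(size);
    this.cursor = -1;          // last published sequence number
  }
  publish(value) {
    const seq = this.cursor + 1;
    this.slots[seq % this.size] = value;
    this.cursor = seq;         // made visible with a barrier in real code
  }
}

class Consumer {
  constructor(ring) {
    this.ring = ring;
    this.next = 0;             // this consumer's own read position
  }
  poll() {
    const out = [];
    while (this.next <= this.ring.cursor) {
      out.push(this.ring.slots[this.next % this.ring.size]);
      this.next += 1;
    }
    return out;
  }
}

const ring = new RingBuffer(8);
const a = new Consumer(ring);
const b = new Consumer(ring);
ring.publish('event-1');
ring.publish('event-2');
console.log(a.poll()); // [ 'event-1', 'event-2' ]
console.log(b.poll()); // [ 'event-1', 'event-2' ]: every consumer sees every slot
```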

Thanks for reading,

See you soon.

See more

Using CDN caching with GraphQL


Hi, today we'll see how to build a CDN caching system for GraphQL in a few quick, easy and dirty steps. The idea for this article came to me because the other day I was reading a blog post about reasons you should not use GraphQL in production. I've been using it on production websites for a while, and I disagree with some of the arguments presented in that piece.

One of them was that you cannot do CDN caching with GraphQL because almost no CDN handles caching of POST bodies. That's why I'll show you in this article how you could (but shouldn't) implement GraphQL GET request caching to achieve (almost) the same as with a JSON REST API.

Creating a simple GraphQL server

So let's begin by creating a GraphQL server with a really simple schema offering a single hello world field. I used Koa (because it is better than Express :troll:) and exposed a /graphql endpoint.

const Koa = require('koa');
const Router = require('koa-router');
const graphqlHTTP = require('koa-graphql');

const {buildSchema} = require('graphql');

const app = new Koa();
const router = new Router();

const schema = buildSchema(`
  type Query {
    hello: String
  }
`);

const root = {
  hello: () => {
    return 'Hello world!';
  }
};

router.all('/graphql', graphqlHTTP({
  schema: schema,
  rootValue: root,
  graphiql: true
}));

app.use(router.routes());
app.listen(4000); // port chosen arbitrarily


Then we run a GraphQL query to see if it works. Annnnnd yes.

curl -X GET ""
{"data":{"hello":"Hello world!"}}

As you can see, I'm passing a query parameter to define which GraphQL operation I want to run on the server.


Forcing GET requests

Actually, our caching could already work as is, since the GraphQL query is passed with the GET method and not POST. But because GraphQL queries can be very big, the default is to use POST. The solution, if we still want GET requests, is to compress the query using lz-string, so the query stays short enough to pass as a URL parameter without any problem.

So, to do that, we'll create a fake client that adds query-string compression and forces GET requests.

const rp = require('request-promise');
const LZString = require('lz-string');

function prepareQuery(query) {
  return LZString.compressToBase64(query);
}

async function sendQuery() {
  return rp({
    method: 'GET',
    uri: '', // your /graphql endpoint
    qs: {
      query: prepareQuery('{hello}')
    }
  });
}

sendQuery()
  .then(res => console.log(res));
So the query will now look like this:

REQUEST emitting complete
{"data":{"hello":"Hello world!"}}

Now, to handle this type of query, we need to add some reaaaally simple logic to our GraphQL middleware that decodes the query before handing it to the schema execution routine. This is done using the function form of the handler, which is executed at the very beginning of the GraphQL Koa middleware.

router.all('/graphql', graphqlHTTP((request, response, ctx) => {
  const decodedQuery = LZString.decompressFromBase64(request.query.query);
  ctx.query = {query: decodedQuery};
  return {
    schema: schema,
    rootValue: root,
    graphiql: true
  };
}));
As you can see, it is very simple and straightforward to make this happen. So why isn't it the default GraphQL implementation?


Why this isn't the default

Actually, GraphQL implementations (I'm mainly talking about Apollo Server, as it is the one I know best) implement caching in a very different way: they cache received data on the client side. Because each client manages its own local cache, a lot of useless requests to the server are avoided.

The other drawback of CDN-side caching is that GraphQL queries may vary slightly between clients, creating lots of cache entries for each client.

To conclude, I'd say this implementation could be useful in some specific cases where clients always issue the same requests and you cannot add a client-side caching system (some legacy systems), but it should be avoided as much as possible, as it is not the most efficient way to cache GraphQL queries.

Go see my website for some more data about who I am:

See more

New project launched : FlyersWeb/react-presentation

FlyersWeb/react-presentation By FlyersWeb
Slides from a React Presentation I made for other developers
September 6, 2017 at 12:48AM
via GitHub

See more

Best free online courses

The 10 best free online courses of 2016 according to the data

See more

New project launched : FlyersWeb/file-extension-api

FlyersWeb/file-extension-api By FlyersWeb
A File extension classification API, because it didn't exist
September 9, 2016 at 01:16AM
via GitHub

See more

New project launched : FlyersWeb/docker-dropbackup

FlyersWeb/docker-dropbackup By FlyersWeb
Sync dropbox folder without installing anything besides docker
June 18, 2016 at 04:56PM
via GitHub

See more

New project launched : FlyersWeb/docker-aria2

FlyersWeb/docker-aria2 By FlyersWeb
Aria2 Docker version
June 5, 2016 at 06:43PM
via GitHub

See more

101 Ways to Make Your Website More Awesome

“I need a checklist. I don’t know how to build a website. That’s why I need to hire someone. But I still want to know what’s involved.” - A checklist to check that you made a good website

See more

Let’s Encrypt is a new Certificate Authority: It’s free, automated, and open

Let’s Encrypt has issued its millionth certificate, helping to secure approximately 2.4 million domains. This milestone means a lot to a team that started building a CA from scratch 16 months ago with an aim to have a real impact on the security of the Web

See more

Process manager for Node.js apps

PM2 is a production process manager for Node.js applications with a built-in load balancer. It allows you to keep applications alive forever, to reload them without downtime and to facilitate common system admin tasks.

See more

Publish your npm right


I've been working hard lately to publish my latest work. For some time, I was working on an evaluation platform for IT recruiters; the idea was to be able to create quizzes and auto-evaluate them. At the beginning I was thinking of a full website to generate and evaluate them but, with the clock ticking, I decided to focus on the evaluation engine.

To integrate it more easily into my other projects, I decided to make it an Express middleware and to share it as an npm dependency, following the Node philosophy: do one thing, but do it right.

How to

To allow users to access your package easily and to avoid breaking things, you should create a version tag for your package matching your package.json:

git tag -a v0.0.1 
git push --tags
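
The tag should match the version declared in your package.json; for instance (the package name here is made up):

```json
{
  "name": "my-express-middleware",
  "version": "0.0.1",
  "main": "index.js"
}
```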

Actually, it was really easy to publish. First of all, you'll need an npm account, set up by typing the following commands:

npm set init.author.name "Your Name"
npm set init.author.email "[email protected]"
npm set init.author.url ""
npm adduser

After that, you'll only have to publish it on npm:

npm publish ./

And that's it: your package is online, with the version from your package.json installed by default.


Remember to manage your tags carefully to avoid breaking your users' applications, and try to use continuous integration to maintain your project more easily.

Go have a look at mine :

Have fun

See more

Manage projects without leaving GitHub

Transform GitHub into a full-featured project management solution. ZenHub’s features display directly in GitHub’s UI.

See more

Trello is a visual way to organize anything with anyone.

Forget sticky notes, spreadsheets, emails and other complicated software for managing your projects

See more

Strider: Open Source Continuous Integration & Deployment Server.

Strider is an Open Source Continuous Deployment / Continuous Integration platform. It is written in Node.JS / JavaScript and uses MongoDB as a backing store. It is published under the BSD license.

See more

New project launched : FlyersWeb/sharetc

FlyersWeb/sharetc By FlyersWeb
share your files securely without any server
March 9, 2016 at 11:43PM
via GitHub

See more

GitHub - googleanalytics/autotrack: Automatic + enhanced analytics.js tracking for common user interactions

The default JavaScript tracking snippet for Google Analytics runs when a web page is first loaded and sends a pageview hit to Google Analytics. If you want to know about more than just pageviews (e.g. events, social interactions), you have to write code to capture that information yourself. Since most website owners care about most of the same types of user interactions, web developers end up writing the same code over and over again for every new site they build. Autotrack was created to solve this problem. It provides default tracking for the interactions most people care about, and it provides several convenience features (e.g. declarative event tracking) to make it easier than ever to understand how people are using your site. The autotrack.js library is small (3K gzipped)

See more

Cross Domain AngularJS + NodeJS


Last time, I was working on a search-engine web application using AngularJS and NodeJS. I wanted to fetch some data from a Node web service running on port 8080 of my virtual machine. So I made my POST request as usual using the $http module, but I soon discovered that changing the server port makes it a cross-domain request. In this article, I'll describe how I solved the problem so that my server accepts such requests.


First of all, you need to know that when specific headers are set on a cross-domain request, the browser first makes an OPTIONS (preflight) request to check whether it can actually make the POST request.
Knowing that, we first need to accept the request on the server side by returning the Access-Control allow headers:

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET,HEAD,PUT,PATCH,POST,DELETE
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Content-Type

By answering with these headers, you declare that you accept the cross-domain request. You can also specify a specific origin if you need to limit the API's accessibility.

As said before, this is not enough: because the browser sends an OPTIONS request, your controller will be called twice on the ExpressJS side. This is why you should add a middleware that catches the OPTIONS request and returns a 204 No Content response.

After that, you'll have a fully compliant Cross Domain Request system in your ExpressJS.


As you know, you're rarely the first to face a problem, which is why I prefer not to reinvent the wheel and to reuse good pieces of software. In this case, you can use the cors npm package, which lets you easily handle cross-domain requests. You can find it on npm, and it is as easy as ABC to use.

Hope it helps, and have fun.

See more

Coursera - Free online courses from the best universities

Take the world's best courses, online

See more

facebook/wangle · GitHub

Wangle provides a full featured, high performance C++ futures implementation. Going forward, Wangle will also provide a set of common client/server abstractions for building services in a consistent, modular, and composable way.

See more

Using ES6 Harmony with NodeJS

Using ES6 Harmony with NodeJS tutorial

See more

New project launched : FlyersWeb/postgresql

FlyersWeb/postgresql By FlyersWeb
Ansible role for PostgreSQL
December 31, 2015 at 03:25PM
via GitHub

See more

New project launched : FlyersWeb/vagrant-docker-ansible

FlyersWeb/vagrant-docker-ansible By FlyersWeb
A vagrant shipped with a docker shipped with an ansible provisioning an nginx
December 25, 2015 at 08:27PM
via GitHub

See more

TensorFlow is an Open Source Software Library for Machine Intelligence

Google's open-sourced machine learning toolbox. CPU + GPU compatible

See more

New project launched : FlyersWeb/DesignPatternsPHP

FlyersWeb/DesignPatternsPHP By FlyersWeb
sample code for several design patterns in PHP
December 16, 2015 at 11:49PM
via GitHub

See more

New project launched : FlyersWeb/funct

FlyersWeb/funct By FlyersWeb
A PHP library with commonly used code blocks
December 16, 2015 at 11:39PM
via GitHub

See more

mattimustang/kalify · GitHub

Automate your penetration testing toolchain: Ansible roles to easily install penetration testing tools.

See more

New project launched : FlyersWeb/bitcannon-ansible

FlyersWeb/bitcannon-ansible By FlyersWeb
Add bitcannon automatic installation
November 28, 2015 at 12:18AM
via GitHub

See more

New project launched : FlyersWeb/dhtbay-ansible

FlyersWeb/dhtbay-ansible By FlyersWeb
DHT Bay automatic deployment script
November 28, 2015 at 12:12AM
via GitHub

See more

New project launched : FlyersWeb/aria2-ansible

FlyersWeb/aria2-ansible By FlyersWeb
Ansible deployment script to install aria2 as a service
November 27, 2015 at 10:52PM
via GitHub

See more

The best vagrant docker ansible development environment

Hello everyone,

It's been a while since my last post; I've been very busy lately because of some changes in my career.

But lately I've been playing with, and watching presentations about, the new development tools coming out. Some really good stuff is waiting to be deployed and played with.

I'll share a development environment setup for web development that I have in mind.

First of all, I don't like development tools to be installed on my real system. I don't know exactly why, but I think my security background tells me not to leave any tool available to a potential attacker. That's why I use a virtual image for my development environment. And whoever says virtual image should almost immediately think of Vagrant. This virtual machine manager works with VirtualBox and VMware boxes, automatically configures your SSH connection and file sharing, and makes it easy to deploy and package prepared boxes.

Besides that, you can use it with well-known provisioning systems. A provisioning tool lets you execute a set of predefined commands to configure your system. My favorite is Ansible, because it uses YAML, has a really simple syntax, and has a script gallery (called Galaxy). Thanks to it, you can easily deploy your favorite development tools and software. But in my opinion, an interesting option is to install Docker through your Ansible configuration.
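
As a sketch, such an Ansible configuration installing Docker could look like this (the package and module names assume a Debian-like box):

```yaml
- hosts: all
  become: yes
  tasks:
    - name: Install the Docker package
      apt:
        name: docker.io
        state: present
        update_cache: yes

    - name: Make sure the Docker daemon is running
      service:
        name: docker
        state: started
        enabled: yes
```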

Docker is based on an old technology that is rapidly becoming state of the art. Like BSD-based systems with their jails, Docker lets you run a process in its own environment, totally separated from the others. Each process runs in a container, allowing you to have a lot of different atomic processes. It's just like having a virtual machine for each process! But for Docker to work, you'll need a Linux kernel with the required features enabled.

Because I really like to have a clean machine, I recommend using a Vagrant box to install this particular development software (I know some people use Docker as a day-to-day tool, but that is another use case). The only problem Docker introduces is that you'll soon have a lot of containers running in parallel, some depending on one another. That's why Docker acquired Fig, a tool for managing container dependencies.

So now that we have our virtual environment packed with native Docker multi-container support, we need to think about our application architecture. Every web application nowadays uses the same pattern: a front end communicating with a back end backed by a database.

While developing, you'll face different challenges that you'll solve through complex algorithms. These solutions need to work in your local development environment, but not only there: you'll need, for example, to follow new versions of your interpreter or compiler and check that all your hard work is still compatible. For this, each project needs a good layer of tests (unit and functional) that you can run against each relevant version; this way, upgrading will not be such a pain.

Another big problem is when a new solution works on your local data but fails lamely on real production data. To avoid such bad news, with Docker containers you can launch an SQL server instance loaded with development or production data really easily. You can also create a profiling configuration based on it, so badly optimized code is rejected automatically (see the Blackfire PHP profiling tool).

So, to sum up: our best development environment is a virtual image with Docker integrated, used for continuous integration tests and profiling, based on rock-solid automated testing launched programmatically with the corresponding data.

Let me know what you think of such a development environment, and I may write a more technical post on this subject.

Remember to have fun.

See you, Flyers.

See more

Kaggle: Go from Big Data to Big Analytics

Machine learning challenges for education, research, and industry.

See more

Azawad, a strategic region

Hello everyone,

Let me share a link to an interview with the former head of external relations of the MNLA about the stakes in the Azawad region, which is going through a terrible humanitarian situation.

Happy reading:


See more

Scott Hanselman's 2014 Ultimate Developer and Power Users Tool List for Windows - Scott Hanselman

These are all well loved and oft-used utilities. I wouldn't recommend them if I didn't use them constantly. Things on this list are here because I dig them. No one paid money to be on this list and no money is accepted to be on this list.

See more

New project launched : FlyersWeb/xhprof

FlyersWeb/xhprof By FlyersWeb
XHProf is a function-level hierarchical profiler for PHP and has a simple HTML based user interface.
September 11, 2015 at 05:33PM
via GitHub

See more

New project launched : FlyersWeb/big-list-of-naughty-strings

FlyersWeb/big-list-of-naughty-strings By FlyersWeb
The Big List of Naughty Strings is a list of strings which have a high probability of causing issues when used as user-input data.
September 9, 2015 at 03:49PM
via GitHub

See more

BIDData/BIDMach · GitHub

A deep learning library using optimized GPGPU kernels, offering great performance (could use more documentation)

See more

TradingView: Free Stock Charts and Forex Charts Online.

The best on the web stock charts and a community of investors who are passionate about sharing trading ideas.

See more

New project launched : FlyersWeb/hupothesis

FlyersWeb/hupothesis By FlyersWeb
Evaluation form platform using NodeJS
July 11, 2015 at 09:00PM
via GitHub

See more


Cross programming language Web platform for Artificial Intelligence research and education. (Put your algorithms through their paces!)

See more

10 essential open source tools to master the Cloud

While the name OpenStack comes up constantly when talking about the Cloud, there is a whole nebula of open source projects around the Cloud. We have surveyed the best initiatives in programming, operating systems and orchestration.

See more