Release of Gospel Cloud Platform v1.1
We’re delighted to announce the availability of Gospel Cloud Platform version 1.1 today. This release focuses on delivering a robust, enterprise-ready product and on ensuring we can scale to meet our clients’ needs.
We’ve upgraded the version of Hyperledger Fabric that sits behind the system to 1.0. This is the first production release of Fabric and a huge step forward from 0.6, which, in many ways, was more of a proof of concept than a finished product. The upgrade gives us access to a stable, PKI-based certificate authority infrastructure, which we also leverage within our own code, and a much more sensible system of endorsement and ordering (if you’re not sure what this means, we’ll cover it in more depth soon). We can also now use CouchDB as the state database within the peers themselves, meaning rich queries against ledger data are significantly quicker.
On the Gospel side, beyond the Hyperledger upgrade, we’ve focused on two main items. First, we’ve completely rewritten our chaincode (the code that runs within the peers themselves) in Go, for two reasons: Go is much faster than Java for this kind of load, so we’ve seen significant performance gains, and Fabric itself is written in Go, so the level of support is much better. Second, we’ve taken a serious look at the best way to provide all three kinds of Gospel deployment – on-premises, cloud and hybrid – in a uniform way, and at how we can maintain the highest level of security whilst accounting for the distributed nature of our application.
The standard deployment model for Hyperledger Fabric is to run all the peers either on one Docker host (not scalable, not robust and easy to attack) or on many virtual machines (messy, hard to secure, slow to scale). We’ve decided that Kubernetes strikes the best balance between these concerns. If you’re not familiar with it, Kubernetes (or k8s for short) is an open-source container orchestration system, originally developed by Google, which allows Docker-style containers to be fully orchestrated across heterogeneous environments. It also provides automatic scaling and a very good degree of fault tolerance. We’re also using the associated Kops project to manage our cloud deployments.
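To make this concrete, running peers under Kubernetes amounts to declaring the desired state in a manifest and letting the orchestrator enforce it. The fragment below is a simplified illustration, not our production configuration – the names, replica count and image tag are placeholders:

```yaml
# Illustrative Deployment for a set of Fabric peers.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: fabric-peer
spec:
  replicas: 3              # Kubernetes replaces any peer pod that fails
  template:
    metadata:
      labels:
        app: fabric-peer
    spec:
      containers:
      - name: peer
        image: hyperledger/fabric-peer:x86_64-1.0.0
        ports:
        - containerPort: 7051   # peer gossip/endorsement port
```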
In practical terms, this means that we now have an infrastructure where, if a peer fails for any reason, it is replaced with a new one, which joins the network and syncs up without our intervention. It also means that communication across hosts is fully secured and encapsulated, and all our logging and reporting feeds into one centralised dashboard, giving us full insight into what the system is doing at any time. For instance, here’s a view of traffic between some of the Hyperledger nodes on our test system:
We can also now seamlessly add nodes within other cloud systems and have them automatically spin up more peers, orderers and so on to handle extra load. Even if, for instance, we run hosts in two AWS availability zones in Dublin, one in London and one in Frankfurt, and then decide to add a host in each of Azure and Bluemix, we can do so without any interruption to service – and still maintain visibility of everything in one place, without compromising the independence of the nodes. If the system became overloaded, it would automatically add more capacity based on rules we’ve specified.
This all adds up to both a much more enterprise-ready platform and a solid foundation for all the exciting features we’ve got planned for the next few months.
If you want to see how we can help your organisation, do get in touch with our team!