Blockchains at IBM

I attended IBM Edge 2016 (videos available here, login required) this past week and there was a lot of talk about their new blockchain service available on z Systems (LinuxONE).

IBM’s blockchain software/service is based on the open source Hyperledger project (originally called Open Ledger).

Blockchains explained

We have discussed blockchain before (see my post on BlockStack). Blockchains can be used to implement an immutable ledger useful for smart contracts, electronic asset tracking, secured financial transactions, etc.

BlockStack was being used to implement a public key infrastructure and a worldwide, distributed file system.

IBM’s Blockchain-as-a-Service offering has plugin-based consensus that can use supermajority rules (2/3+1 of the members of a blockchain must agree to ledger contents) or consensus based on the parties to a transaction (e.g., supplier and user of a component).
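
To make the supermajority rule concrete, here is a minimal Python sketch of the 2/3+1 vote check (an illustration of the arithmetic only, not IBM’s actual consensus plugin code):

    # A minimal sketch of the supermajority (2/3 + 1) consensus rule
    # described above -- an illustration, not IBM's plugin code.
    def supermajority_quorum(members: int) -> int:
        """Votes needed under a 2/3 + 1 supermajority rule."""
        return (2 * members) // 3 + 1

    def can_commit(votes: int, members: int) -> bool:
        """Commit new ledger contents only when a supermajority agrees."""
        return votes >= supermajority_quorum(members)

    # e.g., a 9-member blockchain needs 7 agreeing members to commit
    assert supermajority_quorum(9) == 7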

Bitcoin (an early form of blockchain) consensus used miners (performing hard cryptographic calculations) to determine the shared state of the ledger.

There can be any number of blockchains in existence at any one time. Microsoft Azure also offers Blockchain as a service.

The potential for blockchains is enormous and very disruptive to middlemen everywhere. Anywhere ledgers are used to keep track of assets, information, money, etc., that undergo transformations, transitions or transactions as they are further refined, produced and change hands, blockchains can easily track them. The only questions are whether these assets, information, currency, etc. can be digitally fingerprinted and whether that fingerprint can be read/verified. If so, then blockchains can be used to track them.
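
As a concrete illustration of what such a digital fingerprint could look like, here is a minimal Python sketch that hashes an asset’s attributes into a fixed-size digest a ledger entry can carry and anyone can re-verify (the field names are hypothetical):

    # A minimal sketch of digital fingerprinting: hash an asset's canonical
    # description into a digest. Field names are hypothetical examples.
    import hashlib
    import json

    def fingerprint(asset: dict) -> str:
        """Return a SHA-256 fingerprint of an asset's canonical form."""
        canonical = json.dumps(asset, sort_keys=True).encode("utf-8")
        return hashlib.sha256(canonical).hexdigest()

    record = {"serial": "C-1047", "type": "component", "lot": "2016-08"}
    # The same asset data always yields the same digest, so any party
    # can re-compute and verify the fingerprint recorded in the ledger.
    print(fingerprint(record))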

New uses for Blockchain

IBM showed a demo of their new supply chain management service based on z Systems blockchain in action. IBM component suppliers record when they ship component(s), shippers record when they receive the component(s), port authorities record when components arrive at port, and shippers record when parts clear customs and when they arrive at IBM facilities. I’m not sure whether every one of these transitions was recorded, but there were a number of records for each component shipment from supplier to IBM warehouse. This service is live and being used by IBM and its component suppliers right now.

Leanne Kemp, CEO of Everledger, presented another example at IBM Edge (presumably built on the z Systems Hyperledger service) used to track diamonds from mining, to cutting, to polishing, to wholesaler, to retailer, to purchaser, and beyond. Apparently the diamonds have a digital bar code/fingerprint/signature that’s imprinted microscopically on the diamond during processing and can be used to track the diamond throughout the processing chain, all the way to the end user. This diamond blockchain is used for fraud detection, verification of ownership and to digitally certify that the diamond was produced in accordance with the Kimberley Process.

Everledger can also be used to track any other asset that can be digitally fingerprinted as it flows from creation, to factory, to wholesaler, to retailer, to customer and beyond.

Why z Systems blockchains

What makes z Systems a great way to implement blockchains is its secure, isolated partitioning and advanced cryptographic capabilities, such as hardware-accelerated hashing, signing and encryption, to speed up blockchain processing. z Systems also has FIPS 140-2 Level 4 certification, which provides the highest security possible for blockchain and other security-sensitive operations.

From IBM’s perspective, blockchains speak to the advantages of mainframe environments. Blockchains are compute intensive, require sophisticated cryptographic services and represent formal systems of record, all traditional strengths of z Systems.

Aside from the service offering, IBM has made numerous contributions to the Hyperledger project. I assume one could just download the code and run it in any LinuxONE processing environment. Also, since Hyperledger is Linux based, it could just as easily run on any OpenPOWER server running an appropriate version of Linux.

Blockchains will be used to maintain the systems of record of the future, just like mainframes maintained the systems of record of today and the past.

Comments?


Scality’s Open Source S3 Driver

The view from Scality’s conference room

We were at Scality last week for Cloud Field Day 1 (CFD1) and one of the items they discussed was their open source S3 driver. (Videos available here).

Scality was on the 25th floor of a downtown San Francisco office tower, and the view outside the conference room was great. Giorgio Regni, CTO of Scality, said that on the two days a year it isn’t foggy out, you can even see the Golden Gate Bridge from their conference room.

Scality

As you may recall, Scality is an object storage solution that came out of the telecom/consumer networking industry to provide Google/Facebook-like storage services to other customers.

Scality RING is software-defined object storage that supports a full complement of legacy and advanced interface protocols including NFS, CIFS/SMB, Linux FUSE, RESTful native, Swift, CDMI and Amazon Web Services (AWS) S3. Scality also supports replication and erasure coding based on object size.

RING 6.0 brings AWS IAM-style authentication to Scality object storage. Scality pricing is based on usable storage, and you bring your own hardware.

Giorgio also gave a session on the RING’s durability (reliability), which showed they support 13 nines of data availability. He flashed up the math on this but it was too fast for me to take down. :)
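
Since I didn’t capture the actual math, here is a rough Python sketch of the general kind of durability calculation involved, with purely hypothetical parameters (this is not Scality’s model or their numbers):

    # A rough sketch of erasure-coded durability math -- hypothetical
    # parameters, not Scality's actual model or numbers.
    from math import comb

    def annual_loss_probability(n, k, afr, rebuild_days):
        """Probability of losing an n-chunk, k-of-n erasure-coded object
        in one year, given a per-drive annualized failure rate (afr) and
        a rebuild window in days."""
        # Probability a given drive fails during one rebuild window.
        p = afr * rebuild_days / 365.0
        # Data is lost only if more than n - k chunks fail together.
        window_loss = sum(comb(n, f) * p**f * (1 - p)**(n - f)
                          for f in range(n - k + 1, n + 1))
        # Roughly 365/rebuild_days independent windows per year.
        return 1 - (1 - window_loss) ** (365.0 / rebuild_days)

    # Hypothetical example: 9+3 erasure coding, 4% drive AFR, 1-day rebuild.
    print(f"{annual_loss_probability(12, 9, 0.04, 1):.2e}")  # ~1e-11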

Scality has been on the market since 2010 and has been having a lot of success lately, having grown 150% in revenue this past year. In the media and entertainment space, Scality has won a lot of business with their S3 support. But their other interface protocols are also very popular.

Why S3?

It looks as if AWS S3 is becoming the de facto standard for object storage. AWS S3 is currently the largest repository of objects. As such, other vendors and solution providers now offer support for S3 services whenever they need an object/bulk storage tier behind their appliances/applications/solutions.

This has driven every object storage vendor to also offer S3 “compatible” services to entice these users to move to their object storage solution. In essence, the object storage industry, like it or not, is standardizing on S3 because everyone is using it.

But how can you tell if a vendor’s S3 solution is any good? You could always try it out to see if it works properly with your S3 application, but that involves a lot of heavy lifting.

However, there is another way. Take an S3 driver and run your application against that. Assuming the driver supports all the S3 functionality your application uses, it should all work with the vendor’s real object storage solution.

Open source S3 driver

Scality open sourced their S3 driver just to make this process easier. Now, one can just download their S3server driver (available from Scality’s GitHub) and start it up.

Scality’s S3 driver runs on top of Docker Engine, so to run it on your desktop you need to install Docker Toolbox for older Mac or Windows systems, or Docker for Mac or Docker for Windows for newer systems. (We also talked with Docker at CFD1.)

Firing up the S3server on my Mac

I used Docker for Mac, but I assume the terminal CLI is the same for both. Downloading and installing Docker for Mac was pretty straightforward. Starting it up took just a double click on the Docker application, which places a Docker icon in the toolbar. You do need to enter your login password to run Docker for Mac, but once that’s done, you have Docker running on your Mac.

Open up a terminal window and you have the full Docker CLI at your disposal. You can download the latest S3server from Scality’s Docker Hub by executing a pull command (docker pull scality/s3server). To create and start it, define a new container (docker run -d --name s3server -p 8000:8000 scality/s3server); thereafter, you can stop and restart it with docker stop s3server and docker start s3server.

It’s that simple to have an S3server running on your Mac. The Toolbox approach for older Macs and PCs is a bit more complicated, but seems simple enough.

The data is stored in the container and persists until you stop/delete the container. However, there’s an option to store the data elsewhere as well.
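
To sanity-check the server, you can point any S3 client at it. Below is a minimal Python (boto3) sketch; the endpoint matches the published port above, and the accessKey1/verySecretKey1 credentials were the defaults listed in Scality’s getting-started docs at the time, so verify them against the current README:

    # A minimal sketch of talking to the local S3server with boto3.
    # Default credentials are from Scality's docs at the time -- verify.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:8000",   # S3server's published port
        aws_access_key_id="accessKey1",
        aws_secret_access_key="verySecretKey1",
    )

    s3.create_bucket(Bucket="testbucket")       # buckets must exist first
    s3.put_object(Bucket="testbucket", Key="hello.txt", Body=b"hello S3server")
    print(s3.list_objects_v2(Bucket="testbucket")["KeyCount"])  # -> 1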

I tried to use Cyberduck to load some objects into my Mac’s S3server but couldn’t get it to connect properly. I filed a ticket with the S3server community. It seemed to be talking to the right port, but maybe I needed to use s3cmd to initialize the bucket first – I think.

[Update 2016Sep19: Turns out the S3server getting-started doc says you should download an S3 profile for Cyberduck. I didn’t do that originally because I had already been using S3 with Cyberduck. But I did that just now and it works just like it’s supposed to. My mistake.]

~~~~

Anyways, it all seemed pretty straightforward to run S3server on my Mac. If I were an application developer, it would make a lot of sense to try S3 this way before doing anything on the real AWS S3. And some day, when I grew tired of paying AWS, I could always migrate to Scality RING S3 object storage – or at least that’s the idea.

Comments?

NetApp updates their StorageGRID Webscale solution

NetApp announced a new version of their object storage solution, StorageGRID Webscale 10.3.

At a former employer, I first talked with StorageGRID (Bycast at the time) a decade or so ago. At that time, they were focused on medical and healthcare verticals and had a RAIN (redundant array of independent nodes) storage solution.  It has come a long way.

StorageGRID business is booming

On the call, NetApp announced they sold 50PB of StorageGRID in FY’16 with 20PB of that in the last quarter and also reported 270% Y/Y revenue growth, which means they are starting to gain some traction in the marketplace. Are we seeing an acceleration of object storage adoption?

As you may recall, StorageGRID comes as a software-only solution that runs on just about any white box server with DAS, or as two hardware appliances: the SG5612 (12 drive) and the SG5660 (60 drive) nodes. You can mix and match appliances with white box, software-only nodes; they don’t have to have the same capacity or performance. But all nodes need network access and access to the controller/admin node(s).

StorageGRID past

Somewhere during Bycast’s journey they developed support for tape archives and information lifecycle management (ILM) for objects. The previous generation, StorageGRID 10.2, had a number of features, including:

  • S3 cloud archive support that allowed objects to be migrated to AWS S3 once they were no longer actively accessed;
  • NAS bridge support that allowed CIFS/SMB or NFS access to StorageGRID objects, which could also be read as S3 objects for easier migration to/from object storage;
  • Hierarchical erasure coding option that was optimized for efficiently storing large objects;
  • Node-level erasure coding support that can be used to rebuild data after node drive failures, without having to go outside the node for data retrieval;
  • Object byte-granular range read support that allowed users to read an object at any byte offset without requiring a rebuild (see the sketch after this list);
  • Support for the OpenStack Swift API that made StorageGRID objects natively available to any OpenStack service; and
  • Software support for running as Docker containers or as a VM under VMware ESX or OpenStack KVM, which allowed StorageGRID software to run just about anywhere.
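
The byte-granular range reads above are exposed through the standard S3 Range header. Here’s a minimal boto3 sketch of what that looks like from the client side (generic S3 usage with hypothetical bucket/key names, not StorageGRID-specific code):

    # A minimal sketch of an S3 byte-range GET -- generic S3 usage with
    # hypothetical bucket/key names, not StorageGRID-specific code.
    import boto3

    s3 = boto3.client("s3")  # assumes endpoint/credentials are configured

    # Fetch only bytes 1048576..2097151 (1 MiB) of a large object; the
    # store can serve this without returning the whole object.
    resp = s3.get_object(Bucket="mybucket", Key="bigfile.bin",
                         Range="bytes=1048576-2097151")
    chunk = resp["Body"].read()
    print(len(chunk))  # 1048576 bytes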

StorageGRID present and future

But customers complained StorageGRID was too complex to install and update, requiring too much hand-holding from NetApp professional services. StorageGRID Webscale 10.3 was targeted to address these deficiencies. Some of the features in StorageGRID 10.3 include:

  • Radically simplified, more modern UI, with a new dashboard and policy wizard/editor, so that it’s a lot easier to manage StorageGRID. All features of the UI are also available via RESTful API access, and the UI is the same for white box, software-only implementations as well as appliance configurations;
  • Simplified, automated installation scripts, so that installations that used to take multiple steps, separate software installs and professional services support now use a full-solution software stack install, take only minutes and can be done by customers on their own;
  • S3 object versioning support, so that objects can have multiple versions, limited via the UI if needed, providing a snapshot-like capability for S3 data that protects against accidental object deletion (see the sketch after this list);
  • ILM policy change predictions/modeling, so that admins can now see how changes to ILM policies will impact StorageGRID; and
  • Even more flexibility in DAS storage, so that future StorageGRID configurations can support 10TB drives and 6TB FIPS-140 drive encryption, which adds to the drive capacity and data security options already available in StorageGRID.
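
The S3 object versioning above uses the standard S3 API. Here’s a minimal boto3 sketch of the client side (generic S3 calls with hypothetical bucket/key names, not StorageGRID-specific code):

    # A minimal sketch of S3 object versioning -- generic S3 calls with
    # hypothetical bucket/key names, not StorageGRID-specific code.
    import boto3

    s3 = boto3.client("s3")  # assumes endpoint/credentials are configured

    # Turn versioning on; subsequent PUTs keep prior object versions.
    s3.put_bucket_versioning(
        Bucket="mybucket",
        VersioningConfiguration={"Status": "Enabled"},
    )
    v1 = s3.put_object(Bucket="mybucket", Key="doc.txt", Body=b"v1")
    s3.put_object(Bucket="mybucket", Key="doc.txt", Body=b"v2")

    # A delete without a version id only adds a delete marker; older
    # versions remain -- the snapshot-like protection mentioned above.
    s3.delete_object(Bucket="mybucket", Key="doc.txt")
    old = s3.get_object(Bucket="mybucket", Key="doc.txt",
                        VersionId=v1["VersionId"])
    print(old["Body"].read())  # b"v1"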

To top it all off, StorageGRID 10.3 improves performance for both small (30KB) and large (300MB) object GETs/PUTs.

  • Small S3 Load Data Router (LDR, 1-thread) object performance has improved ~4X for both PUTs and GETs; and
  • Large S3 LDR (1-thread) object performance has improved ~2X for PUTs and ~4X for GETs.

Object storage market heating up

Apparently, service providers are adopting object storage to provide competition to AWS, Azure and Google cloud storage for backup and storage archives, as well as for DR as a service. Also, many media and other customers managing massive data repositories are turning to object storage to support their multi-site, very large file libraries. And as more solution vendors support S3 object protocols for data access and archive, something like StorageGRID can become their onsite/offsite storage alternative.

And AWS, Azure and Google are starting to realize that most enterprise customers are not going to leap to the cloud for everything they do, so some sort of hybrid solution is needed for the long term. Having an on-premises and off-premises object storage solution that can also archive/migrate data to the cloud is a great hybrid alternative that takes enterprises one step closer to the cloud.

Comments?

#VMworld day 1, Cloud Foundation and Cross-Cloud Services

The main keynote topic today at VMworld was how to address the coming cloud tsunami. Pat Gelsinger, citing VMware’s own researchers, believes that 50% of all workloads (OS instances) will be running in public and private clouds by 2021, and that by 2030, 50% of all workloads will be running in the public cloud alone. So today VMware announced two new offerings: VMware Cloud Foundation and VMware Cross-Cloud Services.

Cloud Foundation

Cloud Foundation appears to be a bundling of VMware’s SDDC, NSX®, Virtual SAN™ (VSAN) and vSphere® solutions into a single, integrated stack/package that can be sold and licensed together. No pricing was provided at the show, but essentially VMware wants to give customers a simple way to deploy a VMware private cloud.

VMware states that Cloud Foundation offers customers up to 6-8X faster cloud deployment at a TCO savings of >40%.

VMware also announced a joint partnership with IBM to sell Cloud Foundation services, residing on the IBM Cloud, to their customer base. This broadens the availability of VMware cloud service offerings beyond vCloud and on-premises Cloud Foundation environments.

Cross-Cloud Services

Everyone wants to minimize cloud vendor lock-in, but that’s not possible today except in a few special cases (NetApp Private Storage and similar capabilities from other vendors, cloud storage gateway services, cloud archive services, etc.).

VMware Cross-Cloud Services is the next step down this path, attempting to provide easier workload/data migration, consolidated cost and workload management, and security deployment across public and private cloud boundaries.

Cross-Cloud Services was in tech preview at the show, but it’s intended to make use of standard public cloud APIs to provide specialized, targeted services that allow better cross-cloud migration and management.

The tech preview showed VMware Cross-Cloud Services deploying an NSX gateway in AWS, which allowed NSX to control public cloud IP addresses; once that was done, one could apply security templates to deploy network encryption between apps and their services. VMware used a sniffer to show the traffic before (plain text) and after (encrypted), all done in a matter of minutes. They also showed cost trending information for workloads running across private and public clouds.

Next they showed a demo (movie) of VMware migrating/cloning a simple app to other public and private cloud environments. They had a public cloud Unicycle IoT app running in Ireland/AWS (I think) with a three-tier (web, app, database) structure, and then migrated/cloned that single-site, three-tier app so the web and app tiers were deployed across multiple cloud sites, with a single database instance running in a private cloud.

I started out thinking this was getting us down the path towards cloud virtualization, but in the end it’s a set of much more targeted services, which run in instances/gateways in the public and private clouds to do very specific migration or management activities. Nonetheless, it’s a great first step towards more flexible cross-cloud deployment and management.

VMworld Day 2 looks to be more on current products and enhancements, stay tuned.

Comments?

Microsoft ESRP database transfer performance by storage interface – chart of the month

The above chart was included in our e-newsletter’s Microsoft Exchange Solution Reviewed Program (ESRP) performance report, which went out at the end of July. ESRP reports on a number of metrics, but one of the more popular is total (reads + writes) Exchange database transfers per second.

Categories reported on in ESRP include: over 5,000 mailboxes; 1,001 to 5,000 mailboxes; and 1,000 and under mailboxes. For the above chart we created our own category using all submissions up to 10,000 mailboxes. Then we grouped the data by the storage interface between the host Exchange servers and the storage, and only included ESRP reports that used 10K RPM disk drives.

Facebook moving to JBOF (just a bunch of flash)

At Flash Memory Summit (FMS 2016) this past week, Vijay Rao, Director of Technology Strategy at Facebook, gave a keynote session on some of the areas Facebook is focused on for flash storage. One thing that stood out as a significant change of direction was a move to JBOFs in their data centers.

As you may recall, Facebook was an early adopter of (Fusion-io’s) server flash cards to accelerate their applications. But they are moving away from that technology now.

Insane growth at Facebook

Why? Vijay started his talk with some of the growth they have seen over the years in photos, videos, messages, comments, likes, etc. Each was depicted as an animated bubble chart, with a timeline on the horizontal axis and a growth measurement in % on the vertical axis, with the size of the bubble being the actual quantity of each element.

Although the user activity growth rates all started out small at different times and grew at different rates over their individual timelines, by the end of each video they were all at almost 90-100% growth in 4Q15 (I assume this is a yearly growth rate, but could be wrong).

Vijay had similar slides showing the growth of their infrastructure, i.e., compute, storage and networking. Although infrastructure grew less quickly than user activity (messages/videos/photos/etc.), it showed similar trends and ended up (as far as I could tell) at ~70% growth.

IBM Research creates PCM synapses – cognitive computing, round 4

Last year we reported on IBM’s progress in taking PCM (phase change memory) and using it to create a new, neuromorphic computing architecture (see Phase Change Memory (PCM) based neuromorphic processors). And earlier we discussed IBM’s (2nd generation) TrueNorth chip and IBM’s (1st generation) SyNAPSE chip.

This past week IBM made another cognitive computing announcement. This time they have taken their neuromorphic technologies another step closer to precisely emulating the brain’s neurological processing.

Their research paper was not directly available, but IBM Research has summarized its contents in a short web article with a video (see IBM Scientists imitate the functionality of neurons with Phase-Change device).

QoM1608: The coming IoT tsunami or not

Saw an interesting chart the other day in a post at TechPinions (Searching for What’s Next) showing sales in millions over time of PCs, tablets and smartphones. From the chart, PC sales peaked in 2010-2012 and tablet sales have flat-lined (2016). Not sure what’s projection vs. actuals, but the story on smartphones has yet to run out; they had rapid sales growth between 2008 and 2014.

The other thing to take from this chart is that device adoption is speeding up. It took 20 years to reach peak PC sales but only ~10 years to reach peak smartphone sales.