Bidding Farewell: The End of an Era on My Blog

It’s with a heavy heart that I sit down to write this blog post today. Over the years, this blog has been my virtual sanctuary, a space where I poured out my thoughts on evolving technologies, shared my experiences with security best practices, and connected with an incredible community of readers and fellow bloggers. But, as life and career often take unexpected turns, I’ve arrived at a difficult decision: it’s time for me to discontinue my journey here because of my new career commitments. The friendships I’ve made and the conversations I’ve had here are priceless, and I’ll carry them with me in my heart.

I’m not saying farewell forever, but rather bidding adieu to this chapter of my blogging journey. Who knows what the future may hold?

TLS 1.3 Released: Most secure Web-based communication protocol – Now Available

Just saw the tweet… the IETF has finally released the long-awaited TLS 1.3, considered the most secure Web communication protocol specification to date, promising a high degree of security and privacy along with faster performance compared to its predecessor, TLS 1.2.

The most compelling features of TLS 1.3:

  • More secure: removes outdated algorithms with known vulnerabilities from the TLS cipher suites, including SHA-1, AES-CBC, 3DES/DES, RC4, and a few more.
  • Faster than TLS 1.2, as it establishes the connection with a single round-trip handshake between client and server.
  • Enables Perfect Forward Secrecy by default, and adds privacy through additional encryption during the negotiation phase, which restricts eavesdropping and deep packet traffic analysis.

For supported list of browsers, TLS implementations, and testing sites, refer to:

https://github.com/tlswg/tls13-spec/wiki/Implementations
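
If you have a recent OpenSSL build with (draft) TLS 1.3 support, a quick way to test a server from the command line is shown below; the hostname is just a placeholder:

# Attempt a TLS 1.3-only handshake; look for "TLSv1.3" in the protocol line
$ openssl s_client -connect example.com:443 -tls1_3

If the handshake fails, either the server or your local OpenSSL build does not speak TLS 1.3 yet.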

Also, check out my previous blog entry on TLS 1.3 that I posted a few months ago :).   Go TLS 1.3!!

ATT&CK Navigator: Studying Cyberthreat intelligence from adversary tactics and exploits

Since its inception, I have been following MITRE’s ATT&CK knowledge base and Navigator for studying (threat modeling) pre- and post-exploit techniques against Web, Mobile, and Enterprise applications, particularly those running on Windows and Linux systems. Indeed, it is a great resource for understanding the devil in the details of attack techniques and for simulating them, from simple credential hacking at initial access all the way through exfiltration and command & control. You can study how adversaries launch and execute attacks, and evolve a defense strategy based on the threats we potentially face.

MITRE announced version 2 of the ATT&CK Navigator last week (I believe the last week of May 2018).

MITRE Enterprise ATT&CK Framework for Cyber Threat Intelligence

To review each layer and define your custom attack matrix, you can create layers interactively within the Navigator, or generate them programmatically (see the sketch below) and then visualize them via the Navigator.
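
For instance, a layer is just a JSON file that a script can emit and the Navigator can open. Here is a minimal sketch; the field names follow the published layer format, but double-check them against your Navigator version, and the technique ID and score are arbitrary examples:

# Write a minimal layer file highlighting one technique (T1078, Valid Accounts)
$ cat > my-layer.json <<'EOF'
{
  "name": "My custom attack matrix",
  "domain": "mitre-enterprise",
  "description": "Techniques observed in our threat model",
  "techniques": [
    { "techniqueID": "T1078", "score": 75, "comment": "Valid Accounts, seen at initial access" }
  ]
}
EOF

Then use “Open Existing Layer” in the Navigator and upload my-layer.json to see the technique highlighted.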

For accessing Navigator from your browser:

https://mitre.github.io/attack-navigator/enterprise/

For accessing Navigator using Mobile:

https://mitre.github.io/attack-navigator/mobile/

Check out these URLs for the ATT&CK Matrix for Linux, Windows, and Mac… I will post more details on how to simulate an attack soon.

Post-quantum Cryptography: Impacts, Algorithms, and Hybrid Approaches!

Topics in Cryptology CT-RSA 2018 (Springer Press)

After a week-long dose of non-stop security adventure, I am back from the RSA Conference… and here is my quick dump on PQC!  “Post-quantum Cryptography (PQC) and strategies that resist quantum computer attacks on public-key cryptography” was one of the hottest topics at the Cryptographers’ Panel and in almost every cryptography session. Not surprised at all!  While we know that no commercially available quantum computer exists yet, the current research at D-Wave Systems, Google, Microsoft, and IBM shows that the vast space of superposition states during computation, together with the entanglement of qubits, gives quantum computers an inherent advantage in exponential parallelism, with the potential of handling millions or billions of computations at once.

Impacts of Quantum Computing

With quantum computing, it is the impact of Grover’s algorithm and Shor’s algorithm on the strength of existing cryptographic schemes that makes things interesting.

Grover’s algorithm gives a square-root speedup on key search and can potentially brute-force a cipher by searching every possible key. According to “Applying Grover’s Algorithm to AES: Quantum Resource Estimates” [2], this means that a brute-force attack on a symmetric algorithm like AES-128, which requires about 2^128 AES operations on traditional computers, could be carried out with about 2^64 operations on a quantum computer.  In effect, Grover’s algorithm weakens symmetric schemes by a square-root factor, halving the effective key size, and requires us to double (or further increase) key sizes to stay resistant to it. The same applies to hash algorithms (e.g., SHA-256), where preimages can be found with a square-root speedup (about 2^128 operations) and collisions with a cube-root speedup (about 2^85.3 operations).  Algorithms that are computationally hard on a traditional computer become less hard on a quantum computer for the security parameters (such as key size) chosen today; consequently, these parameters must be revisited and updated as quantum computing evolves.

Shor’s algorithm efficiently solves integer factorization and discrete logarithms in polynomial time on a quantum computer, which leads to easily breaking asymmetric cryptographic schemes like RSA and ECC. Interestingly, unlike Grover’s, it cannot be mitigated by increasing the key size or other parameters of the asymmetric algorithms.

Worse still, any adversary who sniffs Internet traffic and records secure communications today would be able to easily decrypt those recordings with a quantum computer once one becomes available. We do know from several pieces of evidence that many governments are already recording and storing encrypted Internet traffic for data-mining purposes (I may be wrong). This year, the CT-RSA 2018 (Cryptographers’ Track at RSA 2018) sessions were overwhelmingly dominated by PQC algorithms.

Post-Quantum Cryptography Algorithms 

Several PQC algorithms are already available (in early implementation or testing) that are believed to resist attacks by quantum computers; with efficient cryptographic schemes (in terms of smaller key sizes and computational workload), they could potentially replace the existing asymmetric algorithms used in public-key cryptography:

  • Lattice-based cryptography – Proposed by M. Ajtai in 1996 [3], one of the earliest cryptographic approaches relying on the hardness of computational lattice problems, which led to the NTRU public-key encryption scheme based on algebraically structured lattices.  In 2005, Regev introduced the Learning With Errors (LWE) problem (based on lattice problems), which serves as the basis for a variety of public-key encryption and signature schemes. Following LWE, in 2010, Lyubashevsky, Peikert, and Regev introduced Ring-LWE, which uses additional algebraic structure to allow smaller key sizes. An NTRU implementation is available and can be used with several commercial-grade crypto libraries (e.g., OpenSSL, wolfSSL, BouncyCastle, and a few others).
  • Multivariate cryptography – Based on the difficulty of solving systems of non-linear (usually quadratic) polynomials over a finite field. The hardness of the system depends on the size of the finite field, the number of variables, and the degree of the system. To build an asymmetric public-key system, the public key is a set of multivariate quadratic polynomials and the private key is knowledge of a trapdoor that allows solving the multivariate system. Random multivariate systems can also be used for pseudo-random-number generators, cryptographic hash functions, and symmetric encryption.
  • Code-based cryptography – McEliece public-key encryption uses error-correcting codes to hide the contents of a message during transmission over an unreliable channel. The sender deliberately adds errors in order to protect the contents of a message against an eavesdropper. The public key of the receiver is a generator matrix of the receiver’s code; the sender encrypts the message by converting it into a codeword and adding a secret error vector, and the receiver decodes the corrupted codeword to obtain the message. McEliece used binary Goppa codes and requires key sizes of about 4 Mb in order to assure quantum-resistant security.  Niederreiter proposed an alternative approach: instead of introducing an error into the codeword before transmission, it encodes the plaintext as the error (a bit string), and a parity-check matrix is used as the public key instead of a generator matrix. The sender encodes the plaintext as a bit string of weight w and computes the syndrome; the receiver uses a syndrome-decoding algorithm to recover the original error vector. In addition to public-key cryptosystems, Niederreiter’s construction has also been used for signature schemes, hash functions, and random-number generators.
  • Hash-based cryptography – Considered more mature, safer from the impacts of Grover’s algorithm, and reliable for the construction of PQC schemes. Merkle demonstrated how to convert the Lamport-Diffie and Winternitz one-time signature schemes into many-time signature schemes using hash trees. Hash-based signature schemes are currently in IETF standardization. There is, however, a security issue with statefulness: reusing private key material, for example after restoring from backups (data loss), breaks security. The newer variants XMSS (stateful) and SPHINCS (stateless, at the cost of larger signature sizes) are considered quantum-resistant.
  • Supersingular elliptic-curve isogeny cryptography – The newest addition to PQC, based on the difficulty of finding isogenies between supersingular elliptic curves. These schemes have a structure similar to classical Diffie-Hellman and ECDH.

Hybrid approaches

We do know it is premature to use PQC algorithms on their own, as they have not been tested and verified by the global cryptographic community as thoroughly as traditional public-key and symmetric-key algorithms. That said, we can always consider hybrid approaches that fulfill the current state of security requirements and potentially assure quantum resistance in the future. NIST does approve hybrid approaches (cipher suites combining one traditional public-key algorithm with one PQC algorithm), which retain the security of traditional algorithms while also meeting quantum-safe requirements. NIST recently approved a digital signature approach that signs a message twice: first with a PQC algorithm based on stateful hash-based signatures, and then with a NIST-validated signature scheme. For key exchange, both parties would establish two shared secrets, one using a NIST-approved current scheme (ECDH) and a second with a PQC scheme like NewHope (lattice-based). In a typical TLS scenario, this means the TLS handshake uses two key-exchange algorithms, e.g., ECDH (traditional) and NewHope (PQC).
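
The security intuition behind combining two key exchanges is simple: derive the session key from both shared secrets, so the session stays safe as long as at least one of the two schemes remains unbroken. Here is a minimal sketch of such a combiner at the shell; the secret files are hypothetical outputs of an ECDH and a NewHope exchange, and a real protocol would use a proper KDF such as HKDF rather than a bare hash:

# Hash the concatenation of both shared secrets into one session key
$ cat ecdh_secret.bin newhope_secret.bin | openssl dgst -sha256 -binary > session_key.bin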

A hybrid approach with NewHope Algorithm – https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html

Recently, Google demonstrated a hybrid approach for TLS between the Chrome browser and Google servers using two key-exchange algorithms: standard ECDH and NewHope (a lattice-based algorithm).

PQC approaches are fast evolving in terms of design, cryptanalysis, and implementation; sooner or later we should be able to adopt them without making major changes to our applications.  Let’s stay tuned!

 

HSM based Hybrid Approach

Currently, no HSM provider claims to support PQC ciphers. As I noted from one of the engineers at an HSM vendor (Utimaco), crypto software kits that support HSMs can be used as part of a hybrid approach: a PQC-based key exchange like NewHope establishes a secret symmetric key to encrypt communication between the HSM and the application, and a PQC-based digital signature can be combined with an HSM-facilitated signature algorithm to sign hashes.  Microsoft recently introduced an implementation of the Picnic signature scheme for quantum-safe digital signature operations, built on a zero-knowledge proof system (based on a multi-party computation protocol) and using primitives like hash functions and block ciphers. Microsoft released a reference implementation of Picnic on GitHub (which can be used with select HSMs like Utimaco’s), and recently submitted Picnic as a candidate for the NIST PQC signature-scheme standardization.

References:
  1. Topics in Cryptology – CT-RSA 2018 (The Cryptographer’s Track at the RSA Conference 2018), Springer Press.
  2. M. Grassl, B. Langenberg, M. Roetteler, and R. Steinwandt – Applying Grover’s Algorithm to AES: Quantum Resource Estimates – PQCrypto 2016.
  3. M. Ajtai – Generating Hard Instances of Lattice Problems (ACM).
  4. M. Waidner, R. Niederhagen, T. Grotker, and P. Reinelt – Post-Quantum Crypto.
  5. D. Stebila and M. Mosca – Post-Quantum Key Exchange for the Internet and the Open Quantum Safe Project.
  6. Microsoft Picnic reference implementation – https://github.com/Microsoft/Picnic.

TLS 1.3 Approved – Let’s get ready for much faster and more secure HTTPS connections!

It’s been a few years now, and the IETF’s TLS 1.3 standardization effort always looked like a never-ending story. Glad to note the wait is over: after 28 drafts, the IETF last week finally ratified TLS 1.3 as an approved standard.  Indeed, TLS 1.3 promises significantly faster TLS performance and the most secure Web communication protocol standard yet!  It also brings radical changes to its predecessor, TLS 1.2, a protocol currently surviving with many known risks.

TLS 1.3 fundamentally changes the existing TLS 1.2 protocol with several new additions and process changes:

  • Expected to speed up connections by completing the TLS handshake in a single round-trip (TLS 1.2 needs two). The client can send key material and encrypted payload without waiting for server feedback, and all handshake messages after ServerHello are now encrypted.
  • Removes compression and renegotiation.
  • Deprecates legacy public-key key establishment (static RSA key transport and static Diffie-Hellman) and legacy hash algorithms (MD5 and SHA-1).
  • Uses ephemeral elliptic-curve Diffie-Hellman (ECDHE) as the baseline key exchange instead of RSA key transport (known to provide no forward secrecy).
  • Adds the new signature algorithms Ed25519 and Ed448, and extends support for ChaCha20-Poly1305 and the X25519 and X448 key-exchange curves. All key-exchange mechanisms used ensure forward secrecy.
  • Derives keys with the HMAC-based Extract-and-Expand Key Derivation Function (HKDF).
  • Enforces forward secrecy, assuring past sessions stay secure. Deep packet inspection and passive monitoring of TLS sessions will no longer be effective.
  • Introduces Zero Round-Trip Time (0-RTT) resumption (in the spirit of TLS False Start), which significantly speeds up connections to previously visited or frequently contacted Web sites. This will boost the performance of mobile apps and SaaS cloud applications.
  • Resists downgrade attacks: a tampering attacker cannot force peers to negotiate weaker cipher-suite parameters.

and more…

Most browsers (Firefox, Chrome) already provide TLS 1.3 implementations (based on earlier IETF drafts), and OpenSSL 1.1.1 has an alpha version of TLS 1.3 as well. Considering the performance and security gains, TLS 1.3 should see fast adoption across the industry, especially among mobile and SaaS cloud providers!  Undoubtedly, TLS 1.3 is very promising and compelling for secure Web communication… let’s stay tuned.
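
If you want to check what your favorite site negotiates, curl (7.52 or later, built against a TLS 1.3-capable library) can force the new version; the URL below is a placeholder:

# Force TLS 1.3 and watch the verbose output for the negotiated protocol
$ curl -v --tlsv1.3 https://example.com/

Look for a “TLSv1.3” line in the handshake output; if the connection fails, the server or your local TLS library does not support TLS 1.3 yet.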


Encryption and Key Management in AWS – Comparing KMS vs. CloudHSM

Secure data protection using encryption depends more on secure key management processes than on the encryption itself. Although enabling encryption looks quite trivial, managing the underlying key management lifecycle and handling the associated cryptographic operations has always been a daunting challenge! The challenges are more numerous than we realize: key generation and issuance, key ownership, key usage, privileged users of keys, key access controls, least privilege, separation of duties, key rotation, key distribution, key expiry and destruction, and much more, depending on the risk profile you deal with! So don’t be fooled by how easy encryption looks. If everything works out as expected in the first place, just be happy! When things go wrong, be prepared for the worst: yes, you may lose the keys, or lose access to your encrypted data forever. How well are those keys protected and securely persisted in storage? That question leads to the key store stakeholders, the security of the Key Encrypting Keys (KEKs), and the master key of the key store itself.

On the other side, you may find abuses of a single key for everything, not just data encryption: authentication, key encryption, digital signatures, and so on. Believe it or not, many of us only realize the actual risks of key management and its potential consequences when we encounter problems with sensitive data exposure, whether a confidentiality, integrity, privacy, or compliance breach. Mismanagement and a poor choice of encryption algorithms and related key management practices can ultimately compromise the resulting encryption, making it meaningless. And when we encounter such problems, the afterthought options for post-mortem solutions are usually very limited. Adopting Hardware Security Modules (HSMs) has always helped address several known risks of the key management lifecycle and of securely handling cryptographic operations.  That said, I shouldn’t forget to mention that HSMs add significant cost to encryption.

For the past 6 months, I have been reviewing and testing encryption and key management options in AWS, and it has been a pleasant adventure in understanding who owns the risks and controls. Managing encryption and key management in the AWS cloud looks like a piece of cake until we understand the different options and their risk profiles. AWS Key Management Service (KMS) and AWS CloudHSM are the two options available for handling the key management lifecycle and supporting cryptographic operations.

KMS is an AWS-managed service that lets cloud consumers handle select key management lifecycle processes and facilitates symmetric key operations for encrypting data, particularly Elastic Block Store (EBS) volumes, Simple Storage Service (S3) buckets, Redshift data warehouses, databases residing in the Relational Database Service (RDS), and data stores in Elastic MapReduce (EMR).   AWS KMS facilitates a two-tiered envelope encryption model using master keys (a Customer Managed Key or the AWS-managed default key) and data keys: master keys encrypt the data keys used for data encryption, and master keys supposedly never leave the KMS service (evidently AWS-owned HSMs). AWS relies on IAM to define and assign policies for users and roles to create, manage, use, and delete keys. As KMS is a managed service, it handles key rotation, assures highly available key storage manageable via IAM, and supports key auditing through AWS CloudTrail.  KMS also provides a Bring-Your-Own-Key (BYOK) option, where customers can import their own 256-bit symmetric key material, wrapped with an RSA public key using PKCS#1-based schemes (RSAES_OAEP or RSAES_PKCS1_v1_5).
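
To make the two-tier model concrete, here is a minimal envelope-encryption round trip with the AWS CLI (a sketch: the key alias and file names are placeholders, and error handling is omitted):

# 1. Create a customer master key (CMK); it never leaves KMS
$ aws kms create-key --description "demo master key"

# 2. Ask KMS for a data key under that CMK; the response contains the
#    plaintext data key and a CMK-encrypted copy of it
$ aws kms generate-data-key --key-id alias/demo-key --key-spec AES_256

# 3. Encrypt your data locally with the plaintext data key, store only the
#    encrypted copy of the data key next to the ciphertext, and discard the
#    plaintext key. Later, recover the data key via KMS:
$ aws kms decrypt --ciphertext-blob fileb://encrypted-data-key.bin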

Alternatively, CloudHSM is a customer-owned and customer-managed Hardware Security Module (HSM) that provides a dedicated, single-tenant HSM to a cloud customer. AWS owns the responsibility of provisioning the HSM in the customer’s VPC environment in AWS, while the use of the HSM allows the customer full control over the keys residing in it, their key management lifecycle processes, and the cryptographic operations, even accelerating those operations for performance.

For a better comparison of AWS KMS and AWS CloudHSM, here is my review of both services in terms of security controls and features (I added pricing too, as of early 2018):

Comparative Study – AWS Key Management Service (KMS) vs. AWS CloudHSM

Many years ago, during my good old days at Sun Microsystems, I attended a casual meeting with Dr. Whitfield Diffie (co-father of Public Key Cryptography), where he said: “An amateur hacker attacks the encryption to hack the resource, whereas the experienced hacker just looks for ways to get access to the key itself.”

So save the keys carefully and manage them wisely 🙂

References:

AWS CloudHSM

AWS Key Management Service Whitepaper

 

Exploring Hyperledger Fabric v1 – Supply chain demo (Tuna fish shipments)!

Let’s begin with some fundamentals! Adopting blockchain helps establish a “system of proof”, where we can verify the complete historical record of transactions right from the genesis of the blockchain. That record is immutable and unbreakable, meaning it cannot be changed, moved, or deleted. The blockchain’s integrity is protected by cryptographic hashing, key signatures, and timestamping mechanisms.  Every change to the blockchain is only appended as a new block, in order.

In a supply chain scenario, blockchain brings assurance of integrity while maintaining transparency throughout the process. The Linux Foundation has recently introduced a short online course on Hyperledger, available for free. For this demo, I used the provided Hyperledger Fabric v1 supply chain example (establishing legal tuna fish shipments). The demo shows how we can transparently manage and regulate tuna shipments right from the source until they reach the end customer, while being able to validate the legal source throughout the process and avoid illegal, unrecorded sources. The example uses three actors in the supply chain (Fisherman, Restaurant/Consumer, and Regulator), and we keep track of all shipments using the distributed ledger, from the fisherman’s legal catch to the restaurant where it ends up. Regulators can query the ledger to verify and view details of all entries in real time. Hyperledger establishes a permissioned, private blockchain, where only registered and approved participants can join the network via Membership Service Providers (MSPs).

Let’s explore the supply chain scenario using Hyperledger Fabric v1 framework components in terms of its role and relevance on the network. The core components are as follows:

  • Shared Permissioned Ledger contains the current state of all records right from the beginning of the network, along with the series of transaction invocations. The ledger is an append-only system of record, serves as the single source of truth for all transactions, and is made available to all peers on the network.
  • Peers commit blocks and maintain a copy of the ledger. There are two types of peers: endorsers and committers. Endorsers simulate and endorse transactions; committers verify endorsements and validate transaction results.
  • Channels restrict transaction visibility to the members of that channel.  Each channel maintains an independent chain of transaction blocks containing only the transactions specific to that channel.
  • Chaincode encapsulates the asset definitions and the business logic (transactions) for creating and modifying (CRUD) those assets; transaction invocations execute the chaincode.
  • The Orderer accepts endorsed transactions, orders them into a block, and delivers the blocks to the committing peers.
  • MSPs (Membership Service Providers) manage user identities and authenticate all participants in the network.

The demo application provided is written using the Hyperledger Fabric Node.js SDK.

Try it yourself:

  1.  Have your Ubuntu instance (at least 8 GB of memory and 16 GB of storage) up and running. Make sure you have installed the following:
    • Docker and Docker Compose (docker.io and docker-compose)
  2. Download and install the latest Hyperledger Fabric Docker images and platform binaries; refer to the following URL:
https://hyperledger-fabric.readthedocs.io/en/latest/samples.html#binaries

I used the following URL (please note this changes with new builds, so make sure to download from the latest URL obtained from the link above):

$ curl -sSL https://goo.gl/Q3YRTi | bash

3.  Verify the install by running ‘$ docker images’ (the output should look like this):

4. Download the Linux Foundation education sample repository (Hyperledger Fabric v1) and then change to the ‘tuna-app‘ directory.

$ git clone https://github.com/hyperledger/education.git
$ cd education/LFS171x/fabric-material/tuna-app

5. Start the Hyperledger Fabric network using the following command:

$ ./startFabric.sh

A typical output would look like this:

6. As the application is written using the Node.js SDK, it is critical to have the Node.js binaries installed and available for use. The application also depends on the Go toolchain.  So let’s install them, beginning with the Go language, followed by Node.js:

# Install the Go toolchain (a chaincode dependency) and verify it
$ sudo apt install golang-go
$ go version

# Add the NodeSource repository for Node.js 6.x (Ubuntu 16.04 'xenial')
$ sudo bash -c "cat >/etc/apt/sources.list.d/nodesource.list" <<EOL
deb https://deb.nodesource.com/node_6.x xenial main
deb-src https://deb.nodesource.com/node_6.x xenial main
EOL

$ curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | sudo apt-key add -

# Install Node.js and npm, then verify the versions
$ sudo apt update
$ sudo apt install nodejs
$ sudo apt install npm
$ node --version && npm --version

# Install and rebuild the tuna-app dependencies (run inside tuna-app)
$ npm install
$ npm rebuild

Discard warnings, but not errors! If you encounter errors, you may want to start over from the beginning.

7. Finally, register the Admin and User components of our network, and then start the client application using the following commands:
$ node registerAdmin.js

You should see similar output:

$ node registerUser.js

You should see the following output:

8. Now, start the server (server.js) and then try accessing the client at server_host_address:8000.  Make sure port 8000 is accessible on the host. Users can interact with the Web application to query and update the ledger. Under the hood, the application’s SDK automatically sends the endorsed proposal to the Solo ordering service, where it is packaged into a block and then broadcast to all the peers on the network.

$ nohup node server.js &

Try accessing the Web application client using the browser (http://IPaddress:8000/):

Try “Query All Tuna” or “Query a Specific Tuna Catch”:

You should be able to “Create a Tuna Record”, “Change Tuna Holder”, and so on.
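
If you prefer the command line, you can exercise the same REST endpoints the Web UI calls with curl. The routes below are illustrative only; check server.js in the tuna-app directory for the actual paths:

$ curl http://IPaddress:8000/get_all_tuna
$ curl http://IPaddress:8000/get_tuna/1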

THAT’S ALL FOLKS!

Further References:

https://hyperledger-fabric.readthedocs.io/en/release/

https://courses.edx.org/courses/course-v1:LinuxFoundationX+LFS171x+3T2017/course/

Exploring Hyperledger Fabric v1 – Building your first network (BYFN)

It was quite easy! Building a private blockchain using Hyperledger Fabric looked relatively simple when I tried building a couple of demos (similar to my private Ethereum experience)! Lately, I don’t have the luxury of my big old machines at Oracle, so I used my own free-tier account on AWS (thank you, AWS). I was able to quickly put up an Ubuntu instance with 2 GB of RAM and 8 GB of storage. For the BYFN (Build Your First Network) demo from the Hyperledger Fabric samples, I figured it should be good enough to run a few Docker containers with Fabric peers representing two organizations and a Solo orderer (not using Kafka).

Putting it all together (here is my recipe):

  1. Ubuntu instance details:

2.  Before you begin installing Hyperledger, make sure your Ubuntu instance has the ‘curl’, ‘docker.io’, and ‘docker-compose’ binaries installed.  To download and install the latest Hyperledger Fabric Docker images and platform binaries, refer to the following URL:

https://hyperledger-fabric.readthedocs.io/en/latest/samples.html#binaries

I used the following URL:

$ curl -sSL https://goo.gl/Q3YRTi | bash

3. To verify the install, try running ‘docker images’:

Also, make sure your current directory and the bin directory are added to the $PATH environment variable.  For example: $ export PATH=/home/ubuntu/hyperledger/bin:$PATH

4. Now download the Hyperledger Fabric samples from Github, by running the following commands:

$ git clone -b master https://github.com/hyperledger/fabric-samples.git 
$ cd fabric-samples

5. Now, move to the sub-directory “first-network” and run the following command:

$ cd first-network
$ ./byfn.sh -m generate

You should see the following output: the command generates the certificates for the peers, creates the orderer genesis block, creates a single channel (mychannel), and defines the anchor peers for the two organizations (Org1 and Org2).

6. Now, bring up your “first-network” by running the following command:
$ ./byfn.sh -m up

You should see the following output (I truncated it into multiple screenshots for ease of understanding what is going on under the hood).

a) Creating the Orgs, Solo Orderer, Peers and the channel (mychannel)

b) Adding Peers to the Channel

c) Identifying the Anchor peers for each org

d) Installation and Instantiation of Chaincode on the peers

e) Querying chaincode and invoking transactions (see the sketch below).
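
For reference, the query and invoke steps the script runs boil down to peer CLI commands like the following (a sketch assuming the default chaincode_example02 deployed as ‘mycc’ on ‘mychannel’, with the TLS options omitted):

# Read the current value of "a"
$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'

# Move 10 units from "a" to "b"
$ peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}'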

7.  To shut down your “first-network” and delete all the artifacts (stopping the Docker containers, deleting the chaincode images from the Docker registry, and removing the peers’ crypto material), run the following command:

$ ./byfn.sh -m down

To summarize, we executed the “First Network” script: it generated the certificate artifacts for the organizations, created the orderer genesis block and the default channel connecting the orderer with the peers, brought up the private blockchain network on a Solo orderer with peers from both organizations joined to the channel, deployed and instantiated the chaincode, and executed query and invoke transactions against it.

This exercise only helps in understanding the Hyperledger Fabric components and shows the simple steps to quickly spin up and bring down a Hyperledger Fabric network using the sample chaincode (chaincode_example02).  You should be able to modify and tweak the scripts to run other samples (like fabcar under the fabric-samples directory).