Yesterday, Dropbox finally pulled the plug on its Linux support. Let us observe a minute of silence for the deceased.
Of course, Dropbox sees this differently, but their explanation, namely that ext4 is the only reasonable file system (and mind you, not in every configuration, and not with encryption), and the only one that supports extended attributes, is so ridiculous that it is pointless to even discuss it.
Yes, I know: as an OSS enthusiast I should be using NextCloud (and I am). But integration-wise Dropbox is still the best, and many of my applications only offer Dropbox integration (or, God forbid, OneDrive integration).
I have designed two algorithms to maximize profits for an investor who is planning to invest in an ICO. This can be applied to a scenario where the investor is planning to invest through an exchange company that has a dashboard interface for the user.
The primary motivation behind this algorithm is the graph analysis I learned in my high school calculus course. I will now explain the algorithm with a simple graph.
Let us assume there are two companies, named “1” and “2”. This is the setup I would like to have on the dashboard. Let the horizontal axis represent time and the vertical axis represent value. There are three time periods on this graph. They are:
1) The period until ‘A’.
2) The period from ‘A’ to ‘B’.
3) The period beyond ‘B’.
Now let us say we have a customer, “John”, who has created an account with our company and wants to invest his money. Let us say, for example, that he is planning to invest $10. My whole aim is for John to earn the maximum profit on his investment. So we will first suggest that John invest his $10 in company “2”. The forecasting is done at the backend, where we compare the values of the two companies. There will be a time when this trend reverses; I am labeling that point ‘A’. At that moment, we will send John an email or text alert advising him to move his money into company “1”. Then we wait until the trend reverses once again (point ‘B’) and send another alert so that John can move his money back into company “2”.
We can show this comparison on the dashboard. It can be driven by sentiment analysis of the hype on the ICO’s Twitter page, together with the data the ICO provides to us. Text and email alerts can be automated to fire whenever the trend changes. In my explanation I compared two ICOs, but we can give the user/investor the choice to select the ICOs based on their own interests.
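The alerting logic above can be sketched in a few lines of Python. This is only an illustration with made-up forecast numbers; in a real system the two series would come from the backend forecasting model:

```python
def crossover_alerts(values_1, values_2):
    """Return (index, leader) pairs marking each point where the trend
    reverses, i.e. where the higher-valued ICO changes (points 'A', 'B', ...)."""
    alerts = []
    leader = "2" if values_2[0] >= values_1[0] else "1"
    for t in range(1, len(values_1)):
        new_leader = "2" if values_2[t] >= values_1[t] else "1"
        if new_leader != leader:
            alerts.append((t, new_leader))  # here we would email/text the user
            leader = new_leader
    return alerts

# Hypothetical forecasts: "2" leads, then "1" takes over (A), then "2" again (B)
v1 = [1, 2, 3, 5, 6, 5, 4]
v2 = [2, 3, 4, 4, 5, 6, 7]
print(crossover_alerts(v1, v2))  # → [(3, '1'), (5, '2')]
```

Each alert tuple marks a reversal point where the user would be advised to switch companies.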
If implemented properly, the investor should, in principle, come out ahead (assuming the forecasts are accurate). Since people are investing in the ICOs, their market value will increase and they will gain popularity. So we are ensuring that the primary needs of both parties are met.
The weighted knapsack problem is the primary motivation behind this algorithm. Here is a small example: suppose we have to fill a knapsack that can hold a maximum of 10 kg with items, where each item has a value. We need to pack the items in such a way that their combined value is maximized while not crossing the 10 kg limit. I will now explain my algorithm.
In this algorithm, let us assume that we have an investor. We will suggest to him a way to invest so that he will get the maximum return on investment.
Now let us define an equation,
A*x + B*y + C*z = I
Where [A, B, C] are the numbers of tokens of the three ICOs that the investor will purchase, and [x, y, z] are the current values of these tokens. Let ‘I’ be the total amount of money the investor is planning to invest.
Let A*X + B*Y + C*Z = O
Where [X, Y, Z] are the appreciated values of the tokens and “O” is the return on investment.
Now, we will suggest a combination such that when the prices of the three tokens appreciate, the return on investment is maximized.
We can first give the investor the option to choose the number of ICOs he plans to invest in. Once we have that number, we can give him the optimal combination for investing. If the investor has any doubts about the combination, we can put him in contact with our company’s investment advisor to help him decide. The combination is selected with the help of the ICO data in our database, which must contain the predicted future prices of the tokens. This prediction can be achieved the same way as in my first algorithm, i.e. with sentiment analysis of the hype on the ICO’s Twitter page and the data given to us by the ICO.
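Under the stated assumptions (known current prices x, y, z and predicted prices X, Y, Z from our database), the optimal combination can be found by brute force when the number of ICOs is small. A minimal sketch in Python, with hypothetical prices:

```python
from itertools import product

def best_allocation(prices, predicted, budget):
    """Exhaustively search integer token counts (A, B, C) whose total cost
    A*x + B*y + C*z stays within the budget I, and return the combination
    maximizing the predicted return O = A*X + B*Y + C*Z."""
    max_counts = [budget // p for p in prices]
    best, best_return = None, -1
    for counts in product(*(range(m + 1) for m in max_counts)):
        cost = sum(c * p for c, p in zip(counts, prices))
        if cost <= budget:
            ret = sum(c * q for c, q in zip(counts, predicted))
            if ret > best_return:
                best, best_return = counts, ret
    return best, best_return

# Hypothetical current prices (x, y, z) and predicted prices (X, Y, Z)
prices = [2, 3, 5]
predicted = [3, 5, 6]
print(best_allocation(prices, predicted, budget=10))  # → ((2, 2, 0), 16)
```

For more than a handful of ICOs the search space explodes, and a proper knapsack dynamic-programming solution would replace the brute force.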
Thank you for reading my article.
If you have any questions, please feel free to send me an email. You can also contact me via LinkedIn.
For many high-assurance applications such as TLS traffic, medical databases, and blockchains, forward secrecy is absolutely essential. It is not sufficient to prevent an attacker from immediately decrypting sensitive information. Here the threat model encompasses situations where the adversary may dedicate many years to the decryption of ciphertexts after their collection. One potential way forward secrecy might be broken is that a combination of increased computing power and number-theoretic breakthroughs make attacking current cryptography tractable. However, unless someone finds a polynomial time algorithm for factoring large integers, this risk is minimal for current best practices. We should be more concerned about the successful development of a quantum computer, since such a breakthrough would render most of the cryptography we use today insecure.
Quantum Computing Primer
Quantum computers are not just massively parallel classical computers. It is often thought that since a quantum bit can occupy both 0 and 1 at the same time, an n-bit quantum computer can be in 2^n states simultaneously and can therefore solve NP-complete problems extremely fast. This is not the case, since measuring a quantum state destroys much of the original information. For example, a quantum state can encode information about both an object’s momentum and its location, but any measurement of the momentum will destroy information about the location and vice versa; this is known as the Heisenberg uncertainty principle. Therefore, successful quantum algorithms consist of a series of transformations of quantum bits such that, at the end of the computation, measuring the state of the system will not destroy the needed information. As a matter of fact, it has been shown that there cannot exist a quantum algorithm that simultaneously attempts all solutions to some NP-complete problem and outputs a correct solution. In other words, any quantum algorithm for solving hard classical problems must exploit the specific structure of the problem at hand. Today, there are two such algorithms that can be used in cryptanalysis.
The first is Shor’s algorithm. The fastest known classical algorithm for integer factorization is the general number field sieve, which runs in sub-exponential time. In 1994, however, Peter Shor developed a quantum algorithm that solves both integer factorization and the discrete logarithm problem in polynomial time, and therefore would be able to break any RSA or discrete log-based cryptosystem (including those using elliptic curves). This implies that all widely used public key cryptography would be insecure if someone were to build a quantum computer.
The second is Grover’s algorithm, which is able to invert functions in O(√n) time. This algorithm would reduce the security of symmetric key cryptography by a square-root factor, so AES-256 would only offer 128 bits of security. Similarly, finding a pre-image of a 256-bit hash function would only take 2^128 time. Since doubling the key or output size of a hash function or AES is not very burdensome, Grover’s algorithm does not pose a serious threat to symmetric cryptography. Furthermore, none of the pseudorandom number generators suggested for cryptographic use would be affected by the invention of a quantum computer, other than perhaps the O(√n) factor incurred by Grover’s algorithm.
Types of Post-Quantum Algorithms
Post-quantum cryptography is the study of cryptosystems which can be run on a classical computer, but are secure even if an adversary possesses a quantum computer. Recently, NIST initiated a process for standardizing post-quantum cryptography and is currently reviewing first-round submissions. The most promising of these submissions included cryptosystems based on lattices, isogenies, hash functions, and codes.
Before diving more deeply into each class of submissions, we briefly summarize the tradeoffs inherent in each type of cryptosystem with comparisons to current (not post-quantum) elliptic-curve cryptography. Note that codes and isogenies are capable of producing digital signatures, but no such schemes were submitted to NIST.
Table 1: Comparison of classical ECC vs post-quantum schemes submitted to NIST
In terms of security proofs, none of the above cryptosystems reduce to NP-hard (or NP-complete) problems. In the case of lattices and codes, these cryptosystems are based on slight modifications of NP-hard problems. Hash-based constructions rely on the existence of good hash functions and make no other cryptographic assumptions. Finally, isogeny-based cryptography is based on a problem that is conjectured to be hard, but is not similar to an NP-hard problem or prior cryptographic assumption. It’s worth mentioning, however, that just as we cannot prove any classical algorithm is not breakable in polynomial time (since P could equal NP), it could be the case that problems thought to be difficult for quantum computers might not be. Furthermore, a cryptosystem not reducing to some NP-hard or complete problem shouldn’t be a mark against it, per se, since integer factorization and the discrete log problem are not believed to be NP-complete.
Of all the approaches to post-quantum cryptography, lattices are the most actively studied and the most flexible. They have strong security reductions and are capable of key exchanges, digital signatures, and far more sophisticated constructions like fully homomorphic encryption. Despite the extremely complex math needed in both optimizations and security proofs for lattice cryptosystems, the foundational ideas only require basic linear algebra. Suppose you have a system of linear equations of the form
Ax = b

Solving for x is a classic linear algebra problem that can be solved quickly using Gaussian elimination. Another way to think about this is that we have a mystery function,
f(a) = a·x

where given a vector a, we see the result of a·x, without knowing x. After querying this function enough times, we can learn f in a short amount of time (by solving the system of equations above). This way we can reframe a linear algebra problem as a machine learning problem.
Now, suppose we introduce a small amount of noise to our function, so that after multiplying x and a, we add an error term e and reduce the whole thing modulo a (medium-sized) prime q. Then our noisy mystery function looks like
f(a) = a·x + e (mod q)

Learning this noisy mystery function has been mathematically proven to be extremely difficult. The intuition is that at each step of the Gaussian elimination procedure we used in the non-noisy case, the error term gets bigger and bigger until it eclipses all useful information about the function. In the cryptographic literature this is known as the Learning With Errors problem (LWE).
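A tiny Python sketch makes the setup concrete. The parameters and the uniform error range below are illustrative only, not a secure instantiation:

```python
import random

def lwe_samples(secret, q, num, noise=1):
    """Generate noisy LWE samples (a, a·s + e mod q) for a secret vector s.
    With noise=0 the secret is recoverable by Gaussian elimination; the
    small error term e is what makes recovering s hard."""
    n = len(secret)
    samples = []
    for _ in range(num):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-noise, noise)
        b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
        samples.append((a, b))
    return samples

q = 97                 # a medium-sized prime modulus (toy value)
s = [3, 14, 15, 92]    # the mystery secret x
for a, b in lwe_samples(s, q, num=3):
    print(a, b)
```

Each printed pair is one noisy evaluation of the mystery function; without the secret, the samples look essentially random.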
The reason cryptography based on LWE gets called lattice-based cryptography is because the proof that LWE is hard relies on the fact that finding the shortest vector in something called a lattice is known to be NP-Hard. We won’t go into the mathematics of lattices in much depth here, but one can think of lattices as a tiling of n-dimensional space.
Lattices are represented by coordinate vectors. In the example above, any point in the lattice can be reached by combining e1, e2, and e3 (via normal vector addition). The shortest vector problem (SVP) says: given a lattice, find the element whose length as a vector is shortest. The intuitive reason this is difficult is because not all coordinate systems for a given lattice are equally easy to work with. In the above example, we could have instead represented the lattice with three coordinate vectors that were extremely long and close together, which makes finding vectors close to the origin more difficult. As a matter of fact, there is a canonical way to find the “worst possible” representation of a lattice. When using such a representation, the shortest vector problem is known to be NP-hard.
Before getting into how to use LWE to make quantum-resistant cryptography, we should point out that LWE itself is not NP-Hard. Instead of reducing directly to SVP, it reduces to an approximation of SVP that is actually conjectured to not be NP-Hard. Nonetheless, there is currently no polynomial (or subexponential) algorithm for solving LWE.
Now let’s use the LWE problem to create an actual cryptosystem. The simplest scheme was created by Oded Regev in his original paper proving the hardness of the LWE problem. Here, the secret key is an n-dimensional vector with integer entries mod q, i.e. the LWE secret mentioned above. The public key is the matrix A from the previous discussion, along with a vector of outputs from the LWE function
b = A·sk + e (mod q)

An important property of this public key is that when the matrix (A | b) is multiplied by the vector (-sk, 1), we get back the error term, which is roughly 0.
To encrypt a bit of information m, we take the sum of random columns of A and encode m in the last coordinate of the result by adding 0 if m is 0 and q/2 if m is 1. In other words, we pick a random vector x of 0s or 1s, and compute
Intuitively, we’ve just evaluated the LWE function (which we know is hard to break) and encoded our bit in the output of this function.
Decryption works because knowing the LWE secret will allow the recipient to get back the message, plus a small error term
When the error distribution is chosen correctly, it will never distort the message by more than q/4. The recipient can test whether the output is closer to 0 or q/2 mod q and decode the bit accordingly.
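The whole scheme fits in a short Python sketch. The parameters below are toy values chosen so the accumulated error stays under q/4; they offer no real security:

```python
import random

def keygen(n, m, q, noise=1):
    """Secret key: random vector s mod q. Public key: matrix A and b = A·s + e."""
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.randint(-noise, noise) for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)

def encrypt(pk, bit, q):
    """Sum a random subset of the LWE samples and add bit * q//2 to the output."""
    A, b = pk
    m, n = len(A), len(A[0])
    x = [random.randint(0, 1) for _ in range(m)]
    u = [sum(x[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    v = (sum(x[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return u, v

def decrypt(s, ct, q):
    """Compute v - u·s: the result is near 0 for bit 0 and near q/2 for bit 1."""
    u, v = ct
    d = (v - sum(ui * si for ui, si in zip(u, s))) % q
    return 0 if d < q // 4 or d > 3 * q // 4 else 1

q, n, m = 3001, 8, 20   # toy parameters, far too small for real security
s, pk = keygen(n, m, q)
for bit in (0, 1):
    assert decrypt(s, encrypt(pk, bit, q), q) == bit
print("toy Regev round-trips both bits")
```

With noise bounded by 1 and only 20 samples, the accumulated error is at most 20, well under q/4 = 750, so decryption always succeeds.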
A major problem with this system is that it has very large keys. Encrypting just one bit of information requires a public key of size n² in the security parameter. However, an appealing aspect of lattice cryptosystems is that they are extremely fast.
Since Regev’s original paper there has been a massive body of work around lattice-based cryptosystems. A key breakthrough for improving their practicality was the development of Ring-LWE, which is a variant of the LWE problem where keys are represented by certain polynomials. This has led to a quadratic decrease in key sizes and sped up encryption and decryption to use only n*log(n) operations (using Fast Fourier techniques).
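The ring structure can be illustrated with a naive multiplication in Z_q[x]/(xⁿ + 1). This sketch uses the O(n²) schoolbook method; real implementations use the FFT/NTT techniques mentioned above to reach n·log(n) operations:

```python
def ring_mul(f, g, q):
    """Multiply two polynomials in Z_q[x]/(x^n + 1): ordinary convolution,
    with the reduction x^n = -1 folding high coefficients back in with a
    sign flip (a negacyclic convolution)."""
    n = len(f)
    out = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                out[k] = (out[k] + fi * gj) % q
            else:
                out[k - n] = (out[k - n] - fi * gj) % q
    return out

# (1 + x)^2 = 1 + 2x + x^2 in Z_17[x]/(x^2 + 1); since x^2 = -1, this is 2x
print(ring_mul([1, 1], [1, 1], 17))  # → [0, 2]
```

In Ring-LWE, a single such polynomial plays the role of an entire row (in fact many rows) of the matrix A, which is where the quadratic key-size saving comes from.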
Among the many lattice-based cryptosystems being considered for the NIST PQC standard, two that are especially worth mentioning are the Crystals constructions, Kyber and Dilithium.
Kyber is a key-encapsulation mechanism (KEM) which follows a similar structure to the system outlined above, but uses some fancy algebraic number theory to get even better performance than Ring-LWE. Key sizes are approximately 1kb for reasonable security parameters (still big!), but encryption and decryption take on the order of 0.075 ms. Considering this speed was achieved in software, the Kyber KEM seems promising for post-quantum key exchange.
Dilithium is a digital signature scheme based on similar techniques to Kyber. Its details are beyond the scope of this blog post, but it’s worth mentioning that it too achieves quite good performance: public keys are around 1kb and signatures 2kb, and on Skylake processors signing took around 2 million cycles on average, with verification at 390,000 cycles.
The study of error correcting codes has a long history in the computer science literature dating back to the ground-breaking work of Richard Hamming and Claude Shannon. While we cannot even begin to scratch the surface of this deep field in a short blog post, we give a quick overview.
When communicating binary messages, errors can occur in the form of bit flips. Error-correcting codes provide the ability to withstand a certain number of bit flips at the expense of message compactness. For example, we could protect against single bit flips by encoding 0 as 000 and 1 as 111. That way the receiver can determine that 101 was actually a 111, or that 001 was a 0 by taking a majority vote of the three bits. This code cannot correct errors where two bits are flipped, though, since 111 turning into 001 would be decoded as 0.
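This three-fold repetition code is simple enough to sketch directly:

```python
def encode(bits):
    """Triple each bit: 0 -> 000, 1 -> 111."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority vote over each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0 for i in range(0, len(coded), 3)]

msg = [1, 0]
received = encode(msg)
received[1] = 0          # a single bit flip: 111 000 -> 101 000
print(decode(received))  # → [1, 0]: the flip is corrected
```

The 3x blow-up in message length is the price paid for correcting one flip per group; practical codes achieve much better trade-offs.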
The most prominent type of error-correcting codes are called linear codes, and can be represented by k x n matrices, where k is the length of the original messages and n is the length of the encoded message. In general, it is computationally difficult to decode messages without knowing the underlying linear code. This hardness underpins the security of the McEliece public key cryptosystem.
At a high level, the secret key in the McEliece system is a random code (represented as a matrix G) from a class of codes called Goppa codes. The public key is the matrix SGP, where S is an invertible matrix with binary entries and P is a permutation. To encrypt a message m, the sender computes c = m(SGP) + e, where e is a random error vector with precisely the number of errors the code is able to correct. To decrypt, we compute cP⁻¹ = mSG + eP⁻¹, so that mSG is a codeword of G and decoding removes the permuted error term eP⁻¹, leaving mS. The message is then easily recovered by computing (mS)S⁻¹.
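Here is a toy sketch of the McEliece construction in Python. To keep it short, it substitutes the triple-repetition code for a Goppa code and places the error before the permutation so it stays correctable; a real system uses binary Goppa codes and a random weight-t error vector:

```python
import random

def gf2_inv(M):
    """Invert a square binary matrix over GF(2) via Gauss-Jordan elimination."""
    n = len(M)
    A = [list(M[i]) + [int(i == j) for j in range(n)] for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col]), None)
        if pivot is None:
            return None  # singular
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def vec_mat(v, M):
    """Row vector times matrix over GF(2)."""
    return [sum(vi & row[j] for vi, row in zip(v, M)) % 2 for j in range(len(M[0]))]

def apply_perm(v, p):
    return [v[p[j]] for j in range(len(v))]

k, n = 4, 12
# Stand-in for a Goppa code: triple repetition, correcting 1 flip per 3-bit group
G = [[1 if 3 * i <= j < 3 * (i + 1) else 0 for j in range(n)] for i in range(k)]

while True:  # pick a random invertible scrambling matrix S
    S = [[random.randint(0, 1) for _ in range(k)] for _ in range(k)]
    S_inv = gf2_inv(S)
    if S_inv is not None:
        break
perm = random.sample(range(n), n)  # the permutation P
inv_perm = [0] * n
for i, p in enumerate(perm):
    inv_perm[p] = i

# Encrypt: scramble, encode, add a correctable error, then permute
msg = [1, 0, 1, 1]
e = [0] * n
for group in range(k):
    e[3 * group + random.randrange(3)] = random.randint(0, 1)  # ≤1 flip per group
mSG = vec_mat(vec_mat(msg, S), G)
c = apply_perm([a ^ b for a, b in zip(mSG, e)], perm)

# Decrypt: undo P, majority-decode each group to recover mS, then undo S
received = apply_perm(c, inv_perm)
mS = [1 if sum(received[3 * i:3 * i + 3]) >= 2 else 0 for i in range(k)]
recovered = vec_mat(mS, S_inv)
print(recovered == msg)  # → True
```

The security of the real scheme rests on the public matrix SGP looking like a random linear code, which a repetition code obviously does not; this sketch only shows the mechanics of scramble, encode, corrupt, and unscramble.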
Like lattices, code-based cryptography suffers from the fact that keys are large matrices. Using the recommended security parameters, McEliece public keys are around 1 mb and private keys are 11 kb. There is currently ongoing work trying to use a special class of codes called quasi-cyclic moderate density parity-check codes that can be represented more succinctly than Goppa codes, but the security of these codes is less well studied than Goppa codes.
The field of elliptic-curve cryptography is somewhat notorious for using quite a bit of arcane math. Isogenies take this to a whole new level. In elliptic-curve cryptography we use a Diffie-Hellman type protocol to acquire a shared secret, but instead of raising group elements to a certain power, we walk through points on an elliptic curve. In isogeny-based cryptography, we again use a Diffie-Hellman type protocol, but instead of walking through points on an elliptic curve, we walk through a sequence of elliptic curves themselves.
An isogeny is a function that transforms one elliptic curve into another in such a way that the group structure of the first curve is reflected in the second. For those familiar with group theory, it is a group homomorphism with some added structure dealing with the geometry of each curve. When we restrict our attention to supersingular elliptic curves (which we won’t define here), each curve is guaranteed to have a fixed number of isogenies from it to other supersingular curves.
Now, consider the graph created by examining all the isogenies of this form from our starting curve, then all the isogenies from those curves, and so on. This graph turns out to be highly structured in the sense that if we take a random walk starting at our first curve, the probability of hitting a specific other curve is negligibly small (unless we take exponentially many steps). In math jargon, we say that the graph generated by examining all these isogenies is an expander graph (and also Ramanujan). This property of expansion is precisely what makes isogeny-based cryptography secure.
For the Supersingular Isogeny Diffie-Hellman (SIDH) scheme, secret keys are a chain of isogenies and public keys are curves. When Alice and Bob combine this information, they acquire curves that are different, but have the same j-invariant. It’s not so important for the purposes of cryptography what a j-invariant is, but rather that it is a number that can easily be computed by both Alice and Bob once they’ve completed the key exchange.
Isogeny-based cryptography has extremely small key sizes compared to other post-quantum schemes, using only 330 bytes for public keys. Unfortunately, of all the techniques discussed in this post, isogenies are the slowest, taking between 11 and 13 ms for both key generation and shared secret computation. They do, however, support perfect forward secrecy, which is not something other post-quantum cryptosystems possess.
There are already many friendly introductions to hash-based signatures, so we keep our discussion of them fairly high-level. In short, hash-based signatures use inputs to a hash function as secret keys and outputs as public keys. These keys only work for one signature, though, as the signature itself reveals parts of the secret key. This extreme inefficiency of hash-based signatures led to the use of Merkle trees to reduce space consumption (yes, the same Merkle trees used in Bitcoin).
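The simplest instance of this idea is the Lamport one-time signature, which can be sketched in a few lines (SHA-256 stands in for the hash function; any preimage-resistant hash works):

```python
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

def keygen(bits=256):
    """Secret key: two random preimages per message bit. Public key: their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, message):
    """Reveal one preimage per bit of H(message) — why the key is one-time only."""
    digest = H(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][bit] for i, bit in enumerate(bits)]

def verify(pk, message, sig):
    digest = H(message)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(bits))

sk, pk = keygen()
sig = sign(sk, b"post-quantum")
assert verify(pk, b"post-quantum", sig)
assert not verify(pk, b"tampered", sig)
```

Signing a second message with the same key would reveal more preimages and let a forger mix and match them, which is exactly the problem Merkle trees solve by bundling many one-time keys under a single public root.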
Unfortunately, it is not possible to construct a KEM or a public key encryption scheme out of hashes. Therefore hash-based signatures are not a full post-quantum cryptography solution. Furthermore, they are not space efficient; one of the more promising signature schemes, SPHINCS, produces signatures which are 41kb and public/private keys that are 1kb. On the other hand, hash-based schemes are extremely fast since they only require the computation of hash functions. They also have extremely strong security proofs, based solely on the assumption that there exist hash functions that are collision-resistant and preimage resistant. Since nothing suggests current widely used hash functions like SHA3 or BLAKE2 are vulnerable to these attacks, hash-based signatures are secure.
Post-quantum cryptography is an incredibly exciting area of research that has seen an immense amount of growth over the last decade. While the four types of cryptosystems described in this post have received lots of academic attention, none have been approved by NIST and as a result are not recommended for general use yet. Many of the schemes are not performant in their original form, and have been subject to various optimizations that may or may not affect security. Indeed, several attempts to use more space-efficient codes for the McEliece system have been shown to be insecure. As it stands, getting the best security from post-quantum cryptosystems requires a sacrifice of some amount of either space or time. Ring lattice-based cryptography is the most promising avenue of work in terms of flexibility (both signatures and KEM, also fully homomorphic encryption), but the assumptions that it is based on have only been studied intensely for several years. Right now, the safest bet is to use McEliece with Goppa codes since it has withstood several decades of cryptanalysis.
However, each use case is unique. If you think you might need post-quantum cryptography, get in touch with your friendly neighborhood cryptographer. Everyone else ought to wait until NIST has finished its standardization process.
1. Introduction
2. Shaking up the echo-chamber
3. Catching flies with vinegar
4. A bumpy ride
5. The road to mass adoption
   I. Added security for existing applications
   II. Public key authentication
   III. Verifiable certificates
   IV. Decentralized workflows
   V. Self-sovereign identities
   VI. Tokenization
   VII. Trustless financial products
6. Conclusion
We have been serving enterprise clients for the last four years. About two years ago we started focusing on decentralized workflows. This changed the way new clients approach us, in the most peculiar way.
“We want to do something with the blockchain. Can you please come up with a way, how we can apply it to our organization?”
There is no lack of great blockchain projects. But all of them seem to suffer from the same problem; little to no adoption. Can it be that the blockchain is a solution without a problem?
“The biggest waste of all is building a product that customers refused to use.” - Eric Ries, The Lean Startup
Before building yet another great, but unused, blockchain project, we felt we had to dive into this phenomenon. We’ve interviewed dozens of business leaders, corporate working groups and blockchain projects, and found a remarkable misalignment in their perceptions of blockchain technology.
Through this paper, we would like to share our findings from these interviews and propose our solution to get the blockchain from the current state towards being a fundamental part of our digital infrastructure.
After being the mysterious domain of insiders for almost a decade, multinationals and governments are gradually entering the blockchain arena. The spike of interest has created an atmosphere where the willingness to use blockchain technology is very high.
Despite an abundance of task forces and working groups, organizations are struggling to figure out where the actual strategic business value for them lies.
Although cryptocurrencies have grown to hundreds of billions of USD in market cap, blockchain shows barely any significant, actively used real-world applications. Contrary to other emerging technologies, like artificial intelligence or the Internet of Things, it is unlikely that you, as a consumer, use a product powered by blockchain.
Naysayers argue that blockchain lacks the strategic value needed to deliver real-world use cases. Supposedly it is mainly a tool for swindlers and hustlers, relying on the greater fool theory. If organizations continue to fail to find strategic applications for the blockchain, this skepticism may dissolve the current optimistic atmosphere, as blockchain technology itself may become the scapegoat for failed pilots and projects.
Shaking up the echo-chamber
It is easy to dismiss the critics as uninformed technophobes who lack vision. Back in 1995, similar arguments were made about the Internet, like the infamous Newsweek essay “Why the Internet Will Fail”. While there are many similarities with the Internet, there is also a striking difference: in 1995, Internet enthusiasts greatly outnumbered the critics. Can we say the same today about blockchain?
If we’re honest, we must admit that most people who are interested in blockchain merely care about speculation. The general public is reluctant to use blockchains like Bitcoin for their intended purpose and is apathetic about the technology in general.
This is a far cry from the sound that resonates within the blockchain community. For us insiders, it may seem that we are fairly close to building a completely decentralized internet of value and soon can break free from trusted third parties.
From an outside view, our community closely resembles a classic echo chamber. Visions and beliefs are amplified through an endless amount of conferences and meetups that are both presented and visited by the same group of people.
Even though various organizations initiate blockchain teams within their structure, there is usually a yawning gap between these techies and top managers. Teams often find themselves inside the blockchain echo chamber, floating away from their peers to form lonely islands with little to no integration within the organization.
Looking at its current status, it’s very easy to dismiss blockchain as a fad. To counter the skepticism, we must all get out of our echo chamber and start delivering real and indisputable value. Not in a three year plan, but today.
Catching flies with vinegar
“When you make your peace with authority, you become authority.” - Jim Morrison
Bitcoin was created from an anarcho-capitalist philosophy. Fed up with governmental bodies and financial institutions ruling self-servingly from their ivory towers, the makers sought to replace the extant system with a new, better one based on principles of self-ownership, self-reliance and self-regulation.
At the inception of Bitcoin in 2008, we were in the midst of an economic crisis, with very negative sentiment towards the financial industry. These days, however, the mood has shifted. Both the general public and governing institutions are more than content with the current economic situation and primarily concerned with protecting the current prosperity.
The political agenda we’ve inherited from Bitcoin is proving to be detrimental to adoption. Organizations that can drive the technology forward and are willing to embrace blockchain, are constantly being told they’ll be obsolete soon. Can we, as a community, start supporting these organizations instead? Remember, you catch more flies with honey than with vinegar.
A bumpy ride
The road to adoption of a new technology is always a bumpy ride.
In 1956 the Dartmouth Conference gave birth to the field of Artificial Intelligence (AI) and started the first AI revolution. This revolution didn’t last, as the field failed to meet expectations. After another rise and fall in 1980, it wasn’t until the mid-1990s, when the Internet caused a technology boom, that AI managed to reach some real-world adoption.
We seem to be setting ourselves up for a similar ride with the blockchain. Are we truly willing to wait another 30 to 40 years to reach a level of maturity? Two decades ago, the field of AI stopped promising and started delivering. In 2018, it silently works in the background, powering many of the tools we all use on a daily basis. We must adopt a similar strategy for the blockchain if we want to prevent the impending collapse of our field.
The road to mass adoption
Where do we start with blockchain? Its short-term value predominantly lies in reducing costs and increasing efficiency for incumbent organizations. Change, both within an organization and within society, often provokes resistance.
To counter this, we should initially focus on low impact solutions and progressively introduce more significant implementations.
I. Added security for existing applications through anchoring
When explaining the blockchain to organizational decision makers, the word “immutable” particularly strikes their imagination and interest. All decisions within organizations, from blue-collar workers to upper management, are based on data. Unauthorized data manipulation can result in serious damages.
Of course, this immutability comes from distributing the data to a large number of independent parties. Data on a private blockchain run by a single organization is not nearly as immutable.
Second best is to anchor the data: writing a hash of it to a public blockchain. The data can still be manipulated, but the manipulation can easily be detected. This added layer of security doesn’t depend on system privilege levels.
Anchoring is a non-intrusive method that can be applied to existing applications with little effort. An increasing number of software companies and integrators are starting to recognize and implement this method of data validation. Due to its low-transformational character, we expect anchoring to become common practice soon.
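Anchoring itself amounts to very little code. A minimal sketch, assuming the digest is later published to some public chain (the publishing step is omitted here):

```python
import hashlib
import json

def anchor(record):
    """Compute the fingerprint that would be written to a public blockchain.
    Canonical JSON serialization keeps the hash stable across systems."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(payload).hexdigest()

record = {"invoice": 1042, "amount": "500", "status": "paid"}
anchored = anchor(record)          # this digest gets published on-chain

# Later: anyone can detect tampering by re-hashing the stored data
record["amount"] = "50"
assert anchor(record) != anchored  # manipulation is evident
```

Because only the hash leaves the organization, no confidential data is exposed, which is exactly what makes the method non-intrusive.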
II. Public key authentication
Passwords, the most common form of authentication, don’t meet the requirements organizations have today. A shift towards strong authentication using public keys has already begun. This shift started with mobile apps, and with the adoption of the W3C Web Authentication standard, the web should follow shortly.
The blockchain can leverage this shift by providing decentralized authorization via a dynamic chain of trust describing trust relationships between identities. It enables validation of company policies and authorization both internally by the organization and externally. The chain of trust simplifies business processes and mitigates some of the most costly cyber scams.
III. Verifiable certificates
Paper certificates have proven to be unreliable. For as little as €50, you can purchase a novelty degree online. On the other hand, organizational licenses and certificates are shrouded in bureaucracy, making them unnecessarily hard to administer and validate.
Similar to the chain of trust, the blockchain makes it trivial to issue publicly verifiable certificates, which can be easily revoked if needed.
For institutions that already make their certificates public, this is a non-intrusive and low-risk use case. It also lets them project a progressive image by deploying the blockchain. Verifiable certificates could take root in society quickly, as the technology to realize them is already available for real-world application.
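Issuance and revocation can be sketched with two published sets of certificate hashes: a certificate is valid if its hash has been issued and not revoked. On a blockchain both sets would be on-chain state maintained by the issuer; this in-memory version, with an invented `CertificateRegistry` class and example certificate strings, only shows the checking logic.

```python
import hashlib

class CertificateRegistry:
    """Toy registry: an issuer publishes certificate hashes and can revoke
    them. On a blockchain, `issued` and `revoked` would be on-chain state."""

    def __init__(self):
        self.issued, self.revoked = set(), set()

    @staticmethod
    def _digest(certificate: str) -> str:
        return hashlib.sha256(certificate.encode("utf-8")).hexdigest()

    def issue(self, certificate: str) -> None:
        self.issued.add(self._digest(certificate))

    def revoke(self, certificate: str) -> None:
        self.revoked.add(self._digest(certificate))

    def is_valid(self, certificate: str) -> bool:
        digest = self._digest(certificate)
        return digest in self.issued and digest not in self.revoked

registry = CertificateRegistry()
registry.issue("MSc Computer Science / J. Doe / 2018")
assert registry.is_valid("MSc Computer Science / J. Doe / 2018")
assert not registry.is_valid("MSc Computer Science / Forged Name / 2018")
registry.revoke("MSc Computer Science / J. Doe / 2018")
assert not registry.is_valid("MSc Computer Science / J. Doe / 2018")
```

Because only hashes are published, anyone holding the certificate text can verify it, while the registry itself discloses nothing about the certificate's contents.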
IV. Decentralized workflows
The digital revolution has had a tremendous effect on the optimization of internal business processes. When it comes to inter-organizational processes, however, we have to acknowledge that the changes are less drastic. At best, paper forms and faxes have been replaced by digital forms and e-mail, but the underlying processes have hardly changed.
Corporations are reluctant to rely on external systems operated by a counterparty. With no single party in control of systems and data, decentralized workflows might be the answer.
Most corporations are still struggling with the concept of decentralized systems. Fortunately, influential parties in the shipping industry and, somewhat surprisingly, the EU government, are pushing forward the use of this technology.
Given that the pilot programs running today show cost savings and higher efficiency, we’re bound to see large-scale blockchain-powered decentralized solutions in production in 2019.
V. Self-sovereign identities
While there is a fair amount of enthusiasm for self-sovereign identities (SSI), large-scale adoption seems far off. Rather than SSI systems, new high-profile federated authentication systems are being released on a regular basis.
However, while consortiums are trying to push their federated systems, integrators are not content and adoption is relatively low. Current identity services only seem to work on a national level: some countries have multiple services, while other countries have none.
In contrast, SSI systems are not limited by national borders, allowing anyone to participate. The rise in popularity of public key authentication may give SSI the edge it needs.
VI. Tokenization
Tokenization has captured the imagination of the blockchain community, from tokenizing real estate and loyalty points to carbon emission rights.
One of the most anticipated applications of blockchain by businesses is the use of non-fungible tokens in a supply chain to battle counterfeit goods. While organizations are keen to solve this 2 trillion dollar issue, it will likely take years to see large-scale real world implementations.
Supply chain solutions face some major challenges. All parties in the supply chain need to participate, from the manufacturer to the end customers. Multiple, often legacy, software systems need to be updated. Another issue is bulk packaging: items need to be uniquely identifiable or already in their final packaging.
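The anti-counterfeiting idea itself is simple: each physical item gets a unique token minted by the manufacturer, every hand-over is recorded as a transfer, and the end customer audits the token's provenance. The sketch below is a bare in-memory ledger with invented names (`SupplyChainRegistry`, `bag-0001`); a real deployment would put this state on-chain and authenticate transfers cryptographically.

```python
class SupplyChainRegistry:
    """Toy non-fungible token ledger: each physical item gets a unique token
    whose ownership history can be audited. Names are illustrative only."""

    def __init__(self):
        self.history = {}  # token_id -> ordered list of owners

    def mint(self, token_id: str, manufacturer: str) -> None:
        if token_id in self.history:
            raise ValueError("token already exists")
        self.history[token_id] = [manufacturer]

    def transfer(self, token_id: str, sender: str, receiver: str) -> None:
        owners = self.history[token_id]
        if owners[-1] != sender:
            raise ValueError("sender does not hold the token")
        owners.append(receiver)

    def provenance(self, token_id: str) -> list:
        return list(self.history.get(token_id, []))

ledger = SupplyChainRegistry()
ledger.mint("bag-0001", "factory")
ledger.transfer("bag-0001", "factory", "distributor")
ledger.transfer("bag-0001", "distributor", "retailer")
assert ledger.provenance("bag-0001") == ["factory", "distributor", "retailer"]
```

Note that the hard problems listed above are not in this logic at all: they lie in binding `bag-0001` to the physical item and getting every party in the chain to record its transfers.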
Unfortunately, most tokenization initiatives only focus on fulfilling an ideology but fail to provide actual business value over existing non-blockchain solutions. We expect tokenization to be the subject of pilots for the coming years, followed by a major shake-out, before these solutions have any impact on industries.
VII. Trustless financial products
While Bitcoin came to life as a reaction to failing banks in a dysfunctional financial system, fiat currencies survived the Great Recession mostly unscathed. As a result, FinTech solutions that don’t support the local currency are not usable by governments and businesses in a meaningful way.
For example, most employees won’t accept getting their salary in cryptocurrency. The currencies are much too volatile; a deposit in Ethereum runs an enormous risk of decreasing in value while locked.
This is not the only hurdle. The immutable and public nature of smart contracts is arguably their biggest asset, but it is also the source of many new challenges concerning scalability, privacy, security and legal context.
Given the huge potential impact on the financial system, activity of FinTech related blockchain projects will probably not diminish any time soon. This will continue to be experimental though. At best, we might see a blockchain based interbank infrastructure in the near future.
Other use cases for the blockchain are simply far less challenging, both in a technical and a social context. General availability of trustless financial products should not be considered the start, but the conclusion of this path to mass adoption.
Relying on the blockchain to handle day-to-day business processes is a necessary first step towards mass adoption and the creation of strategic business value. Organizations must be allowed to experience the benefits of the blockchain as a supportive technology.
Recent failed proofs of half-baked concepts aimed at transforming industries have mostly led to disillusionment among organizations and their decision makers. If we want to make blockchain a success, regardless of who we are, we should manage expectations and create a realistic long-term plan. Additionally, we must stop scaring decision makers with stories of blockchain immediately disrupting their organizations and jeopardizing thousands of jobs.
By pursuing baby steps on the long road to mass adoption, we can change the way organizations view the current state of blockchain, making them understand and embrace the numerous advantages the technology has to offer.
The blockchain is a sound concept and there is no doubt it will have its role eventually. However, we, the blockchain community, decide today whether we’ll look back at this period as the peak of just another hype cycle, or as our 1995.
It’s a year to the day since I left Japan after exactly 10 years and 1 day of living there.
It’s no secret that my departure was long overdue and that by the end I loathed so much about Japan and Japanese society that it was damaging my physical and emotional health continuing to live there.
I made a promise to myself that I wouldn’t write about leaving at any length until a year later to give the vitriol time to subside and hopefully be replaced by a more balanced and thoughtful critique…
This is long and not as coherent as it should be, but I hope it begins to explain something about why I left, for myself if not for anyone else.
When I first moved to Japan, I knew I understood nothing (or very little) of it as a culture, but as time progressed and I learned the language and culture, I slowly began to feel like I understood Japan and its people. Gradually, though, I slipped down the other side of the bell curve and increasingly felt like I understood less and less, as so much of it made no sense to me.
I can speak a reasonable level of Japanese, I paid my taxes and never committed a crime on Japanese soil. I was polite and courteous unless given a reason not to be and did my best to respect the culture or at least the parts I thought were worthy of respect.
And here comes the first of my problems: almost universally in Japan, if as a foreigner you criticize or wish to discuss some element of Japanese culture, you are greeted with cries of “Why do you hate Japan?” and “If you don’t like it, leave”.
There is precious little room for any discussion and more often than not an impasse is reached with the statement “This is Japan. This is how WE do things”.
Much has been written and discussed about the Japanese word for foreigner, “外国人”, Gaikokujin (外 – outside, 国 – country, 人 – person), and the more commonly used “外人”, Gaijin (外 – outside, 人 – person), which seems to me at least closer to “outsider”.
Most if not all Japanese people will tell you that the nuance (unlike ALL other nuances in Japanese) is not important and that “Gaijin” conveys none of the supposed slight I always felt from it.
Even if taken that way, to be referred to merely as “foreigner” and not, say, English, Spanish or Nigerian smacks of an “us & them” attitude frowned upon pretty much everywhere else in the developed world.
But in a country that values belonging to the group above all else, to be constantly referred to as “outsider” and told it means “foreigner” seems at best disingenuous.
The workplace presents another insidious annoyance born of Japan’s age based (as opposed to merit based) hierarchy and institutional xenophobia.
Everyone older and more senior must always be referred to by their surname and the honorific “San”; “Suzuki San” translates as Mr. (or Ms.) Suzuki. This is a hard rule: no younger staff member would dare to call Mr. (or Ms.) Suzuki by his or her first name when anyone else was present, and if they did, the consequences would not be pretty.
At every place I ever worked, however, I was always referred to by my first name “Adrian” or at best “Adrian San” even by much younger staff members.
No big deal you might say, but again, in a country where nuance of language is so terribly important, it displays a lack of respect and more of that “not one of us” attitude that grates.
Japanese people often told me that Kabukicho (a part of Shinjuku in Tokyo, famous mostly for its open sex trade and its less open but equally famous drug dealers) was the most dangerous place in Japan, and often asked if I wasn’t scared of the Yakuza (the Japanese mafia) who openly parade the area.
My reply was always a simple one, “The Yakuza and Kabukicho aren’t scary, if you don’t mess with them they don’t mess with you. The Police? Now that’s a different story. Hands down scariest thing in Japan is the Police & the Justice system”.
Uniformed and plain-clothes policemen (93.2% men, hence the gendered noun) are everywhere; you see far more of them on a daily basis than you ever would here in the UK (even in London I can count the number I’ve seen in a week on my fingers; the same fingers wouldn’t last me till lunchtime on a single day in Tokyo). They often cover their faces with the white paper masks Japanese people are so fond of, and cover up their badge numbers if you try to write them down.
I was stopped many times just walking down a street minding my own business, often surrounded by 3 or more officers who stood within a few feet of me and did their best to be as intimidating as possible. This was usually for a “gaijin card check” (sic) and bag search.
Japanese Police can hold you for up to 30 days without charge and with no contact to the outside world. Amnesty International has stated that despite denials they believe beatings and sleep deprivation are common techniques used on people in police custody.
There are numerous cases of people being seriously injured or even dying whilst in Police custody.
No recording of Police or prosecutor questioning is required by law (except in some very limited cases).
Confessions can not later be retracted in court even if the accused claims they were forced.
Scary enough if you are Japanese, but all these powers in a police force that was ordered by three-time former Tokyo Governor Mr. Ishihara to “regard all foreigners as suspicious”?
In 2009 Japan introduced its “lay judge” system, a watered-down version of a western jury, to try cases of serious crimes. However, the presiding judge can still override the lay judges’ verdict.
Studies have shown that the lay judges often give out harsher sentences than the prosecution is asking for.
There are no “hate speech” laws; only in May 2016 was the first national law passed to condemn the advocacy of hatred (“hate speech”) towards residents of overseas origin and their descendants. (Pay careful attention to that wording, so as not to misunderstand that the law criminalizes hate speech itself, because it doesn’t.)
It is not that uncommon, even in Tokyo, to see shops, bars or other establishments with “No foreigners” signs on their doors, usually justified on the grounds that the proprietors only speak Japanese and foreigners would, therefore, be “troublesome” to accommodate.
The level of English in Japan is shockingly low, despite everyone studying it for six years at school. I honestly think it is kept deliberately low: people have access to the web, but they can’t understand the English that 90% of it is written in, so there is no need for Chinese-style censorship.
No country I’ve ever been to hides itself behind a screen of cultural elitism quite like Japan, which, considering that their architecture, traditional clothes, chopsticks, sushi and writing system are all Chinese, is a bit rich.
They are masters of appropriation, I’ve been told straight faced on more than one occasion that pizza is Japanese.
It feels like The Borg made real.
There is at once the 建て前 (tatemae – official stance, public position) of “We feel inferior to westerners, whom we were the victims of in WWII” and the 本音 (hon-ne – true opinion, real intention) of “We are far superior to those filthy unsophisticated animals, we won’t even call ourselves Asians because we are better than those ethnic savages that we killed 20 million of in the same WWII”
The Japanese are true masters of this passive-aggressive politeness, the only positive of which is that it’s way better than aggressive-aggressive non-politeness.
The famous Japanese saying “The nail that sticks up gets hammered down” nicely illustrates Japan’s shame based culture in which the stick is the fear of failure or losing face as opposed to the carrot of the (false) promise of success used in the West.
I’ll happily admit that as a white, middle-class male I’m not terribly used to having discrimination aimed at me, and that these soft Hello Kitty forms of xenophobia pale into insignificance next to the much less cuddly versions people experience in the UK. But levels of unpleasantness are not a competition: discrimination is discrimination, whatever form it takes.
I came slowly to understand over my time there that a great deal of this xenophobia is born of ignorance rather than malice.
In one of the most (on the surface at least) developed, literate countries in the world this ignorance is, at best, a poor excuse for attitudes that would be met with derision in more enlightened 21st-century countries.
That’s not to say that some nasty malice isn’t present, because it most definitely is. I lost count of the times when, for example, politely pointing out to someone that there was a queue and they shouldn’t push in was met with an immediate and very angry “BAKA GAIJIN” (stupid foreigner).
I have been assaulted whilst the perpetrator used similarly racist language, and I personally know several other people who have been too.
What we in the UK would call a hate crime.
It was somewhere I lived for 10 years, but it was never my home, and could never have been my home. Regardless of how good my Japanese language skills became or how much I tried to become like them, I would never have been fully accepted, never been allowed in the club.
It led to a mild form of Stockholm syndrome just to survive everyday life and some of its effects still linger.
Now back in my country of birth, I’m often asked “How was living in Tokyo, it must have been amazing!” only to be greeted with confused and disappointed faces when I reply how much I hated it. “Why?” they ask and it is difficult to answer because it is not one big thing that I can point to, but rather a “death by a thousand cuts” that I have tried (as much for my own mental health as for anyone to read) to enumerate here. There are things I have left out, forgotten, blanked from my memory or would take too much explanation and long recounting of anecdotes to bother with.
Japan offered me many opportunities and I dearly miss my friends there but the country and I had a deeply toxic relationship and I made a pact with myself the day I left that I would never set foot there again in this life.