What is SHA3? SHA3 is the next-generation secure hashing algorithm recommended by NIST. It's the first departure from the Merkle-Damgård (MD) family of hash constructions seen in SHA1 and SHA2. SHA2 remains the prevailing hash algorithm today, but SHA3 is already used in newer technology such as the Ethereum blockchain.

What is Motoko? From the Internet Computer wiki:

The Motoko programming language is a new, modern and type safe language for developers who want to build the next generation of distributed applications to run on the Internet Computer blockchain network.

Okay, so why SHA3 in Motoko, you ask? Well, it started with this bounty. The whole project sounded kind of neat (minus the "Internet Computer" name, but I'm no marketing guy), so I was in. I also wanted to try the exercise with pair programming. For the reasons why the designers decided to create a whole new language, see this blog.

The plan of attack:

  1. Find a buddy crazy enough to try this with pair programming
  2. Learn Motoko and some Internet Computer architecture around it
  3. Get a better understanding of SHA3
  4. Share knowledge between us
  5. Start pair programming: one driver, one researcher

Our solution is posted on GitHub.

The project took about 20 hours in total, including some research. Roughly half of the research time was not counted in that total, and time away from the computer, not thinking about the project, was also excluded.

Learning Motoko

Overall, Motoko was pretty easy to learn. The language is mostly straightforward due to its simplicity and overlap with other languages we knew. Here are some of the resources we used:

  1. Internet Computer developer docs
  2. Motoko Language Tour
  3. Motoko base on Github

Kudos to the unsung heroes of documentation writing. They make ramping up on a project so much easier!

I found the Concise overview of Motoko particularly useful, as many of the concepts were familiar from other languages. Our approach here was to go through each section step by step, stopping to discuss anything either of us was unclear about. One thing that caught me was the lambda syntax, as seen here:

func applyNTimes<T>(n : Int, x : T, f : T -> ()) {

I expected the parameter f : T -> () to look more like f : (T) -> (). Maybe that was a me thing. After noticing some code blocks were runnable, running the code cleared that up quickly.
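To see it in action, here's a small sketch based on the tour's example; the recursive body and the Debug.print call are my own additions:

```motoko
import Debug "mo:base/Debug";

// From the language tour: apply f to x, n times.
func applyNTimes<T>(n : Int, x : T, f : T -> ()) {
  if (n > 0) {
    f(x);
    applyNTimes(n - 1, x, f);
  };
};

// The single parameter needs no parentheses, in either the type or the lambda:
applyNTimes<Text>(2, "hello", func(s) { Debug.print(s) });
```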

During implementation, two additional things caught me. The first was the use of = vs :=, the latter being for assignment to mutable variables. All the other usual suspects like += and |= worked as expected, so it took me a moment to realize that = is the outlier, used for binding immutable variables.
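A minimal sketch of the distinction:

```motoko
let a : Nat = 1;   // immutable binding uses =
var b : Nat = 1;   // a mutable declaration also uses =
b := 2;            // but reassignment uses :=
b += 1;            // compound assignments work as usual (b is now 3)
// a := 2;         // error: a is declared with let and cannot be reassigned
```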

Another peculiarity was this warning:

warning [M0155], operator may trap for inferred type

On lines like this:

for (i in Iter.range(0, dsize-1:Nat)) {

The issue was the use of dsize-1. A thread on Stack Overflow suggests it's because Nat types are non-negative; however, I think this is more about type confusion around the -1. A DFINITY thread provides a better solution, which doesn't seem to be connected to underflow checking. This is supported by the line:

private let rsize: Nat = (bsize - cap)/8;

which throws no such warning, despite bsize and cap both being Nat types.
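Whatever triggers the warning, the trap it refers to is real: Nat has no negative values, so subtraction underflow fails at runtime. A quick sketch of my own to illustrate:

```motoko
// Nat cannot go negative, so subtraction traps at runtime on underflow:
func sub(a : Nat, b : Nat) : Nat {
  a - b  // traps when b > a, e.g. sub(0, 1)
};
```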

SHA3 Overview

The project was particularly interesting because it gave me a chance to look into SHA3 which uses a sponge construction that differs from the usual MD construction used in SHA1 and SHA2. The idea here is that there's an absorb phase where data is processed into the internal state, followed by a squeeze phase where a bunch of pseudo-random data can be squeezed out.

During the NIST SHA-3 competition, Keccak (pronounced ket-chak) was chosen as the algorithm for SHA3. However, SHA3 holds some of Keccak's parameters static. In particular, l = 6 was chosen, which sets the block size and number of rounds. Another restriction, given the application of SHA3, is that the output is constrained to the hash size (e.g. 512 bits). So even though Keccak's sponge construction allows for more pseudo-random data, SHA3 only uses the first N bits, where N is the desired hash size. The l parameter, along with the hash size, sets the block size for the hash. For example, with a 512-bit hash size, the block size becomes 576 bits. See the implementation for more details.
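The arithmetic behind that 576-bit figure can be spelled out in a few lines (variable names here are illustrative, not from the repo):

```motoko
// For l = 6, the Keccak state is 25 lanes of 2^l bits each:
let b : Nat = 25 * 2 ** 6;     // 1600-bit state width
let hashBits : Nat = 512;      // desired hash size
let cap : Nat = 2 * hashBits;  // SHA3 fixes the capacity at twice the hash size
let rate : Nat = b - cap;      // 1600 - 1024 = 576-bit block size
```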

As an aside, I found it really interesting that Keccak can be used to squeeze out more pseudo-random data, and that data could be used as a stream cipher (see SHA-3 and The Hash Function Keccak, pg. 8). It makes me wonder about the pseudo-random properties of Keccak. How much data can be squeezed out? How does a potential key size relate to how much data can be used for the cipher? Interesting stuff.

If you want more details about SHA3 and prefer video, check out Christof Paar's SHA3 lecture. I found his lectures quite enjoyable.

After an overview of the algorithm, my pair programming buddy suggested the common initialize-update-finalize pattern seen in other hashing function implementations. After learning about the Keccak sponge construction, the initialize-update-finalize pattern still seemed like a good fit for the implementation. For example, the tiny_sha3 implementation uses that pattern. I used that reference to help construct the keccak_f function, and it made implementation much easier.
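The shape of that pattern, from a caller's perspective, looks something like this. This is a hypothetical usage sketch; the names SHA3.init/update/finalize and the variables are illustrative, not the exact API in our repo:

```motoko
// Hypothetical initialize-update-finalize usage (illustrative names only):
let ctx = SHA3.init(512);         // set up internal state for a 512-bit hash
SHA3.update(ctx, chunk1);         // absorb data incrementally, in any-sized pieces
SHA3.update(ctx, chunk2);
let digest = SHA3.finalize(ctx);  // pad, absorb the tail, and squeeze out the hash
```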


Some reference implementations of SHA2 were provided in the bounty. Knowing that the initialize-update-finalize pattern would work, and seeing a bunch of other crypto work done in this package, I decided to add our implementation to it, following their architecture. This reduces dependency complexity, since our code gets included for free if developers need the other implementations as well. I also like following existing standards and implementations because, as you may know, the best way to mess with a standard is to create your own standard.

One notable detail was moving between Nat8 and Nat64 operations. Accessing bytes in a buffer is easiest with Nat8, whereas accessing quad words in the three-dimensional Keccak array was best done with Nat64. As far as I know, there's no union structure as found in C, so I needed some way of XORing bytes within the quad words. I created pack64 and unpack64 functions to help with that: they take an array of Nat8s and pack them into a single Nat64, and vice versa.
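The real functions live in the repo; the bodies below are an illustrative sketch of the idea, assuming little-endian byte order (byte 0 becomes the least significant byte of the quad word):

```motoko
import Array "mo:base/Array";
import Iter "mo:base/Iter";
import Nat8 "mo:base/Nat8";
import Nat64 "mo:base/Nat64";

// Pack 8 bytes into one Nat64, little-endian (illustrative, not the repo's code).
func pack64(bytes : [Nat8]) : Nat64 {
  var w : Nat64 = 0;
  for (i in Iter.range(0, 7)) {
    w |= Nat64.fromNat(Nat8.toNat(bytes[i])) << Nat64.fromNat(8 * i);
  };
  w
};

// Split one Nat64 back into its 8 little-endian bytes.
func unpack64(w : Nat64) : [Nat8] {
  Array.tabulate<Nat8>(8, func i {
    Nat8.fromNat(Nat64.toNat((w >> Nat64.fromNat(8 * i)) & 0xff))
  })
};
```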

To reduce the number of calls to the packing functions, I decided to pack Nat8s into Nat64s using a temporary buffer (qbuf) and then XOR each buffered input Nat64 with the Nat64s in the Keccak internal state. The drawback shows up when many small writes are made: each write causes at least one whole Nat64 XOR operation. I chose this approach because I figured all the data massaging likely outweighed the Nat64 XOR, which probably translates into a single native machine instruction anyway.

Alternatively, I could have unpacked the Nat64, done an XOR on the Nat8, and then repacked the Nat64, assigning it in place. All that data conversion seemed like it would be slower than the Nat64 XOR alone. If I find the time to benchmark, I may revisit this. If someone knows better, please let us know.

Comments on Pair Programming

This exercise was likely not a good candidate for experimenting with pair programming because of all the research needed. Either that, or we need to find a better way to keep up with scheduling. Much of the research was done individually, with sync points every 1-2 hours. On the other hand, there were quite a few good moments of knowledge sharing at those sync points. In the end, some confusion over the bounty payment shifted motivation onto other projects. I decided to stay focused on it out of a desire to contribute more to open source in general.

One general comment is the cost due to loss of freedom when pair programming. The approach requires both people to schedule time where both can focus on the task at hand. This means shifting meetings and reducing the flexibility in working hours typically allowed in remote environments. I can also see people becoming tired for a variety of reasons, making the approach less appealing. If you're the tired one, you feel unmotivated to contribute and/or guilty that you're not pulling your weight. One immediate solution would be breaks, naps, or whatever is needed to bring an equal effort, but then we're back to the scheduling problem.


Overall, the project was engaging and interesting. The bounty provided by ICDevs created a good backdrop with an achievable scope, and the pair programming approach provided some additional insights. Motoko was easy to pick up and enjoyable to work with.

The repository for the project can be found at https://github.com/react0r-com/crypto.mo/