This post assumes the reader knows what a blockchain is.
Layer 1
Let’s start with what a layer 1 blockchain is. A layer 1 blockchain is your traditional blockchain, complete with peer-to-peer networking, storage, and consensus. Many well-known blockchains are layer 1 blockchains, including Ethereum, Bitcoin, and Solana. A layer 1 blockchain has everything needed to execute and store transactions on the chain.
Layer 2
To improve on some deficiency in a particular blockchain, other, higher-level blockchains have arisen that build on top of layer 1 blockchains. These are “layer 2” blockchains. They use an underlying layer 1 blockchain but attempt to improve on parts of it, typically speed and cost. These chains have their own execution environment but then interact with the layer 1 blockchain. Examples of layer 2 blockchains include the Lightning Network on Bitcoin, and Arbitrum, Polygon, and Optimism on Ethereum.
Layer 0
To allow builders to create their own custom blockchains, layer 0 blockchains have arisen. A layer 0 blockchain is a base chain that supports multiple layer 1 blockchains built on top of it. It contains all the building blocks needed to create a layer 1 chain and supports cross-chain communication. These building blocks include consensus, peer-to-peer networking, virtual machines for executing smart contracts, and block production. Examples of layer 0 blockchains are Avalanche, Horizen, Polkadot, Cosmos, and Metal Blockchain.
Conclusion
If you want to create a dapp, a decentralized application, you will most likely create it on a layer 1 or layer 2 chain, depending on your needs. If you want to create your own blockchain and have as much control over its operation as you need, you can build one from scratch, or you can save a lot of time and gain built-in benefits by building on a layer 0 chain.
Thanks to Curry for putting on this incredible event for young women in high school to get exposure to computing, technology, and data science.
Session Description
Cryptocurrencies, NFTs, and Smart Contracts are some of the hottest buzzwords these days. But they’re really an evolution of technologies that came before them.
We’ll delve into the source and meaning of these buzzwords, and what the technologies could mean for our everyday lives. Then I’ll show you how you can get involved right now, by creating your own digital collectible (NFT).
Validating is a form of participation in a blockchain ecosystem. A validator runs a piece of software, commonly referred to as a “node”. This node is one member of a distributed network of nodes, and each node keeps a copy of the blockchain. A validator node does the work of verifying each transaction, checking that it is accurate and allowed.
Validators are incentivized to behave correctly by the potential of block rewards. If a validator does not behave according to the rules of the chain, it will receive no rewards. It costs money to run a node, so in essence, the validator will lose money. (In some blockchain ecosystems a validator could forfeit their stake for misbehaving, but not on Metal Blockchain.)
A place to host your validator node. This could be a cloud provider such as AWS, Azure, Google Cloud, or Hetzner; or it could be a computer you host yourself.
I was privileged to be a presenter at Boston Code Camp 34 in Burlington on 5/29/23.
Session description:
Cryptocurrency, smart contracts, web3, and blockchain may seem scary, but they’re really just a new domain with new programming languages and frameworks. As devs, we’re used to that. Come and hear about the various ways you can build apps on or for blockchains.
We’ll discuss building blockchains, writing smart contracts to run on blockchains, writing non-smart contract code to access blockchain data, and last but not least web3. Some code samples will be shown.
This post describes delegating and being a delegator on the Metal Blockchain.
In the blockchain world, delegating is a form of participation in a particular blockchain’s ecosystem.
Let’s start out with some definitions.
Definitions
blockchain
a decentralized, immutable ledger. You can think of a blockchain as a database that, once written, can never be updated, and because it is decentralized, many copies of this database exist across the internet. Metal Blockchain is one instance of a blockchain. See metalblockchain.org for more details specific to Metal Blockchain.
token
a cryptographic asset that can be used on a blockchain. Examples of tokens include USDC (USD Coin), which represents the US dollar, and MTL, a utility token that gives holders lower fees on Metal Pay. Tokens are often referred to as cryptocurrencies, but not all tokens are intended to act as currencies.
consensus
a method of agreeing on an outcome. Blockchains need a way of ensuring that the data written on-chain is accurate, and each blockchain uses a consensus mechanism for this. The two most commonly used consensus mechanisms are proof-of-work (Bitcoin) and proof-of-stake (Metal Blockchain). For more info on the differences, see here. Metal Blockchain uses the proof-of-stake method for verifying transactions, aided by Avalanche consensus.
validator
a node participating in consensus. A validator runs a piece of software continuously; it keeps a copy of the blockchain and uses its tokens to create and validate new blocks. If a validator is a good actor, it receives rewards; if it is a bad actor, it receives no rewards. Rewards are commensurate with the size of the stake.
stake
tokens used by a validator to participate in consensus. Validators must put up a stake, and they are incentivized to act honestly by the rewards they can earn for doing so. Tokens used in staking are locked up for some period of time, and while they are locked up they cannot be withdrawn. For Metal Blockchain, this would be METAL tokens.
delegator
an individual who delegates tokens to a validator. Delegating tokens is also known as staking. A delegator stakes their tokens and receives them back when the staking period ends. The delegator also receives a share of the rewards.
What / Why
So, delegating is a form of participation in a blockchain ecosystem. An individual delegates tokens to a validator. The validator, in turn, uses those tokens to actively participate in the operation of the blockchain.
You are purchasing tokens and then locking them up for a period of time.
Why is this useful for the Metal blockchain?
You are helping to ensure the accuracy and security of the blockchain. A healthy blockchain needs many different validators and many stakers to ensure that there can be no monopoly over the operation of the chain.
What’s in it for me?
Rewards and support of the blockchain ecosystem and community.
What are my ongoing responsibilities?
There are no ongoing responsibilities. When the staking period ends, you can withdraw your tokens or re-stake.
What are the risks?
There are no risks for your METAL tokens; when the staking period ends, you get them back. However, the dollar value of your tokens could go down while they are locked up, and you may not receive any rewards if there are problems with the validator you chose.
To find your C-Chain address in your Metal Blockchain wallet, go to the Contract tab on the widget on the home page. 0xb0ba01d6d586056fd7177432ad6a6f53a9bf7725 is the C-Chain address for this wallet.
find C-Chain address
Move tokens from the C-Chain to the P-Chain:
move tokens from C-Chain to P-Chain
Delegate to a validator, choose validator:
choose a validator
Delegate to a validator, choose the amount and term:
Background (or why would I even think to do this?): I inherited some useful scripts that were written in perl. When I started working with them, it was far easier to refresh my decade-old knowledge of perl than to rewrite them in some other language. The scripts update data in a database, and since my application is hosted in Azure, I decided to have these scripts write to SQL Azure. This is no problem in perl; it's pretty much the same as connecting to any MSSQL db. Since the data that these scripts generate is time sensitive, I really need to schedule them to run. (Most of them need to be run daily, and a couple of them more often.) Since my main app, which uses this data, already runs in Azure, and my database is SQL Azure, I decided to look into running these scripts from Azure itself. So now I have these scripts running under my Azure Web Role deployment, scheduled with Quartz.Net.
The Project:
The solution includes a web role with a startup task. The startup task uses a PowerShell script to download Strawberry Perl from blob storage and then unzips it with 7-Zip. Lastly, the startup task installs the required CPAN modules.
In the OnStart() method of the web role, a Quartz.Net perl job is scheduled to run periodically (every minute in the sample project below).
When the quartz job is triggered to run, the perl job starts the perl process and executes the perl script. It captures standard output and standard error, and writes them to the trace logs.
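To make this more concrete, here is a minimal sketch of the job and the scheduling call, written against the fluent JobBuilder/TriggerBuilder API of Quartz.Net 2.x. The class names, the perl.exe path, and the script path are placeholders, not the exact code from the sample project:

using System.Diagnostics;
using Quartz;
using Quartz.Impl;

// Illustrative perl job; paths are placeholders.
public class PerlJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        var startInfo = new ProcessStartInfo
        {
            FileName = @"perl\bin\perl.exe",        // wherever the startup task unzipped Strawberry Perl
            Arguments = @"scripts\update_data.pl",  // hypothetical script name
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            CreateNoWindow = true
        };

        using (var process = Process.Start(startInfo))
        {
            // Fine for the small amount of output these scripts produce.
            string stdout = process.StandardOutput.ReadToEnd();
            string stderr = process.StandardError.ReadToEnd();
            process.WaitForExit();

            // Write captured output to the trace logs.
            Trace.TraceInformation("perl stdout: " + stdout);
            if (!string.IsNullOrEmpty(stderr))
                Trace.TraceError("perl stderr: " + stderr);
        }
    }
}

public static class PerlJobScheduler
{
    // Called from WebRole.OnStart(): run the job every minute (interval is illustrative).
    public static void Start()
    {
        var scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        var job = JobBuilder.Create<PerlJob>().WithIdentity("perlJob").Build();
        var trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInMinutes(1).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}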
That’s really all there is to it, though it’s trickier to set up than I had thought. I’ve posted the solution to github, to hopefully make this easier for the next person.
If you need to do this or something similar to this, I highly recommend the following articles/blog posts. They were incredibly valuable to me:
After downloading, follow these steps to get a working solution:
use nuget to retrieve Quartz.net
download 7za.exe, the 7-Zip Command Line Version, available here: http://www.7-zip.org/download.html; place this in the root of WebRole1 and set “Copy to Output Directory” to “Copy always”
set the Diagnostics Connection String in ServiceConfiguration.Cloud.cscfg
set the url to the strawberry perl zip file in downloadPerl.ps1
This should run in the development environment and in Azure. You can browse to the startup log files in \approot\bin\startuplogs. In the development environment, during debugging, this would be under [Your Cloud Project]\csx\Debug\roles\WebRole1. In the cloud I found this under an E:\ or F:\ drive (while connecting via RDP).
Some gotchas (or maybe tips):
Multiple instances of this role will each execute the perl scripts on their own schedule, so the scripts need to be idempotent. In the long run I don't want this behavior, so the next step for this project is to use blob leases in the perl job (see the sketch after this list). I plan to deploy the script changes via blob, so at the start of a job, the job will try to acquire a blob lease on the script. If the job can acquire a lease, it will download the script, execute it, and release the lease. If the job cannot acquire a lease, it won't do anything. The scripts already contain logic to make sure the data was not already generated by a different run of the script; in that case a script will start, notice that there's nothing to do, and end.
Because I used a feature of powershell 2, I needed to specify osFamily="2", so my role would run on Server 2008 R2. (ServiceConfiguration.Cloud.cscfg)
I initially used powershell and Shell.Application to unzip the perl zip file. This worked quite well in the development environment but failed in Azure. After being unsuccessful at tracking down the cause, I switched to 7-Zip, which just worked. I would have preferred not having this dependency.
I made sure to log/trace almost every step of the startup. It was invaluable to be able to browse to the startup logs in the development environment, and later in Azure (via RDP) to see what was going on.
I had to explicitly add a DiagnosticMonitorTraceListener in my WebRole's OnStart, so that standard output and standard error from my perl job execution would also be logged. (See Neil MacKenzie's article, and the second sketch after this list.)
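Here is a rough sketch of the lease pattern mentioned in the first gotcha above. I'm showing it against the current Azure.Storage.Blobs client purely for illustration (a project of this vintage would have used the storage client library of the day), and the container name, blob name, and local path are all assumptions:

using System;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static class LeasedScriptRunner
{
    // Only the instance that wins the lease downloads and runs the script.
    public static void RunIfLeaseAcquired(string connectionString)
    {
        var blob = new BlobClient(connectionString, "scripts", "update_data.pl"); // hypothetical names
        var lease = blob.GetBlobLeaseClient();

        try
        {
            // Leases run 15-60 seconds (or infinite); only one holder at a time.
            lease.Acquire(TimeSpan.FromSeconds(60));
        }
        catch (RequestFailedException ex) when (ex.Status == 409)
        {
            return; // another instance holds the lease; skip this run
        }

        try
        {
            blob.DownloadTo(@"scripts\update_data.pl"); // hypothetical local path
            // ... start the perl process here, as in the Quartz job sketch above ...
        }
        finally
        {
            lease.Release();
        }
    }
}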
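And for that last tip, the listener registration itself is small; it looks something like this in the web role (the AutoFlush setting is my own addition here, not necessarily what the sample uses):

using System.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Route Trace output (including the perl job's captured stdout/stderr)
        // to Azure diagnostics so it ends up in the trace logs.
        Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
        Trace.AutoFlush = true; // my addition; flush eagerly so nothing is lost on recycle

        // ... schedule the Quartz.Net perl job here ...

        return base.OnStart();
    }
}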
This is for my own future reference. I had NUnit tests that I was able to debug without any problem in Visual Studio 2008, with a dll targeting .Net 2, but I needed a feature in .Net 4 for a POC. I converted the Visual Studio project to VS 2010 and changed the dll and unit test dll to compile for .Net 4. The tests still ran fine, but I could no longer break in the debugger (from F5). In order to do this, I needed to target the .Net 4 framework with the NUnit /framework switch (see image), and…
to configure nunit-console-x86.exe to run with .Net 4 by commenting out supportedRuntime version="v2.0.50727" in C:\Program Files (x86)\NUnit 2.6\bin\nunit-console-x86.exe.config, like so:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <!--
   The .NET 2.0 build of the console runner only runs under .NET 2.0 or higher.
   The setting useLegacyV2RuntimeActivationPolicy only applies under .NET 4.0 and
   permits use of mixed mode assemblies, which would otherwise not load correctly.
  -->
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <!-- Comment out the next line to force use of .NET 4.0 -->
    <!-- <supportedRuntime version="v2.0.50727" /> -->
    <supportedRuntime version="v4.0.30319" />
  </startup>

  <runtime>
    <!-- Ensure that test exceptions don't crash NUnit -->
    <legacyUnhandledExceptionPolicy enabled="1" />

    <!-- Run partial trust V2 assemblies in full trust under .NET 4.0 -->
    <loadFromRemoteSources enabled="true" />

    <!-- Look for addins in the addins directory for now -->
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="lib;addins" />
    </assemblyBinding>
  </runtime>
</configuration>
The top screenshot also shows that I’m redirecting standard output to a file. This is because I’m doing some crude performance analysis in the tests, collecting timing information and then writing it to standard output. This file (“TestResults.txt”) gets written to the same directory that the dll is in. I’m also targeting a particular test category (“MyCategory”), because the test suite has many more tests, but I only care about one group (aka category).
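For reference, since the screenshots aren't reproduced here, the invocation I'm describing looks roughly like this; the test assembly name is a placeholder, and I'm using the /framework, /include, and /out switches of the NUnit 2.6 console runner:

nunit-console-x86.exe MyTests.dll /framework=net-4.0 /include=MyCategory /out=TestResults.txt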
The NCAA collects player statistics from many different sports, but if you want to get that data, it’s not always easy. It’s not like they have an open API. Recently, I wanted to get the season-long statistics for all NCAA baseball players, divisions 1-3, so that data could be used for other purposes. I wrote a screen scraper to do it. It’s a very simple script, which crawls the site, scraping the information and outputting it to csv (comma separated values) so it can then be imported into an Excel spreadsheet or a database. Highlights:
Can scrape 1 school’s worth of baseball player data, if you know the NCAA’s numeric identifier for the school.
Can scrape all men’s baseball player data for all NCAA schools for which data is available. (Not all schools have data listed)
Implementation details:
It’s written in php.
It’s written for men’s baseball, but it could be made more generic by changes to the parsing of the HTML pages and the generation of the csv.
It could be changed to pull another sport’s data in under an hour (probably closer to 1/2 hour).
It does not include the NCAA’s player ids in the output, but that would be easy to add.
My challenges:
I hadn't written anything serious in php in a while, since my day job consists mostly of Java, JavaScript, C#, and VB.Net. I chose php only because I suspected that the person who was going to use this would have an easier time with it than with another language.
Because the NCAA.org site requires an established Java session, each page is actually fetched with two requests: the first retrieves a jsessionid, and the second requests the data with the jsessionid attached. This makes the application issue twice as many requests as should be needed. (This certainly could be improved, but considering the script is not intended to be run many times, this is probably sufficient.)
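The two-request pattern looks roughly like this. The real script is php; I'm sketching it in C# only to show the shape of the pattern, and the cookie-based session handling is my assumption about how the site behaved:

using System.Net.Http;
using System.Threading.Tasks;

public static class NcaaFetcher
{
    // The actual scraper is php; this C# sketch just illustrates the two-request pattern.
    public static async Task<string> FetchPageAsync(string url)
    {
        // The handler's cookie container holds on to the jsessionid (assumption).
        var handler = new HttpClientHandler { UseCookies = true };
        using (var client = new HttpClient(handler))
        {
            // Request 1: establishes the Java session and captures the jsessionid.
            await client.GetAsync(url);

            // Request 2: the same URL again, now sent with the session attached.
            return await client.GetStringAsync(url);
        }
    }
}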
Considering I wrote it in 4-5 hours, in a language I hadn't written serious code in for several years, and considering I only needed it for a one-time scraping, I think it came out ok. I don't expect to need this script again. The code is available here:
I finally got a chance to finish reading “High Performance JavaScript”. This had been on my to-do list for almost a year. I bought the book after seeing an online video of Nicholas Zakas speaking about JavaScript performance, and when I found out that he would be speaking at the Boston Web Performance Group, I finally decided to stop putting it off and read it. I'm glad I did, and I'm glad I got to see his presentation in person. This book is chock-full of tips on writing performant JavaScript. Admittedly, browser vendors are improving JavaScript engines at a rapid pace, making some techniques less important than they were just a couple of years ago, but the insights here are still indispensable. Understanding how JavaScript engines work under the covers should be mandatory for anyone who writes front-end JavaScript for a living, and this is one of the best resources I know of for learning this.
If writing JavaScript is a significant part of your job, I highly recommend that you read this book. A lot of the focus is on front-end JavaScript, but much of the information applies to server-side JavaScript too (Node.js). I couldn't give it 5 stars, just because the rapid evolution of JavaScript engines and browser technologies makes it a little murky as to which of the information and techniques discussed are the most important… today. I also didn't particularly like the abrupt changes in writing style by the guest authors, but that's a minor annoyance and shouldn't stop anyone from jumping in.
Overview:
Preface – A brief overview of the history of JavaScript and JavaScript engines.
Ch. 1 – Loading and Execution – This chapter discusses the JavaScript source files themselves and what impact their loading and execution has on the actual and perceived performance of a web page. It gets into the nitty gritty of how to avoid having scripts that block the UI on the page, and how to minimize the number of HTTP requests the page makes.
Ch. 2 – Data Access – This covers where data values are stored within your JavaScript program, i.e. literals, variables, arrays, and objects. There is a cost to accessing your data, and this chapter is worth the price of the whole book, just because of the explanations of identifier resolution and scope chains.
Ch. 3 – DOM Scripting – This was written by Stoyan Stefanov (http://phpied.com). It covers the interactions between the browsers’ scripting engines and DOM engines. It has very good explanations of html collections and how to work with them in the most performant way. It covers browser repaints and reflows and how to minimize them. And it goes over event delegation and using parent event handlers to handle events on children to minimize the quantity of event handlers on a page.
Ch. 4 – Algorithms and Flow Control – This chapter discusses the performance implications of how you use loops and conditionals, and how you structure your algorithms. It includes comparisons of the different loop constructs, of recursion vs. iteration, etc.
Ch. 5 – Strings and Regular Expressions – This was written by Steven Levithan (http://blog.stevenlevithan.com/). This chapter explains the common pitfalls that lead to poorly performing string operations and regexes, and how to avoid them. In doing so, the author describes what the browser is doing under the covers… interesting, but probably best used as a reference (i.e., not something I'll just remember after one read).
Ch. 6 – Responsive Interfaces – Covers the Browser UI Thread and how UI updates and JavaScript code execution share the thread. Discusses techniques to reduce the time of long-running JavaScript using timers, or, in newer browsers, Web Workers.
Ch. 7 – Ajax – This was written by Ross Harmes (http://techfoolery.com/). “This chapter examines the fastest techniques for sending data to and receiving it from the server, as well as the most efficient formats for encoding data.” There are some interesting things in this chapter that I rarely see talked about elsewhere, such as Multipart XHR, image beacons, and JSON-P.
Ch. 8 – Programming Practices – Covers some programming practices to avoid, such as avoiding double evaluation, and not repeating browser detection code unnecessarily. Also covers practices to use, such as object and array literals and using native methods when available.
Ch. 9 – Building and Deploying High-Performance JavaScript Applications – This was written by Julien Lecomte (http://www.julienlecomte.net/blog/category/web-development/). This chapter discusses the important server-side processes and tasks that go into better performing web applications, including combining and minifying resources, gzipping, applying proper cache headers, and using a CDN.
Ch. 10 – Tools – This was written by Matt Sweeney of Yahoo. This chapter lists tools for profiling (not debugging) your application. This breaks down into JavaScript profiling and Network analysis, and includes descriptions of many useful tools such as native browser profilers, Page Speed, Fiddler, Charles Proxy, YSlow, and dynaTrace Ajax Edition.
I'm taking a small break from my technology posts to blog about an event I was privileged (and lucky) to attend in December: the FanDuel Fantasy Football Championship, which took place in Las Vegas. Anyone who knows me knows that besides my family, my biggest interests are sports, classic movies, and programming, not necessarily in that order. So it should come as no surprise that I play fantasy sports. I've been playing in traditional fantasy football leagues for years, but last year I discovered weekly/daily fantasy via FanDuel.com. These are salary cap games, where you ‘buy’ players for your team and need to stay under a salary cap while filling out an entire roster. Just like other fantasy games, you accrue points when your player scores or accumulates other stats that are significant in the fantasy game. The unique aspect of these games is that they are daily (or weekly, depending on the sport), so almost any day of the year you can compete in a fantasy sports game. And because of the current state of gambling laws, fantasy sports is not considered gambling, so you can enter pay games and win real money.
That being said, I don't play every day, but I follow football and hockey closely enough that I always think I can enter a roster and win. Guess what? I did, and more than once. In Week 1 of the NFL season, I came in first place in one of the FanDuel football tournaments. That earned me enough funds to enter other tournaments for the rest of the NFL season, and I ended up coming in first place in the Week 11 Qualifier for the FanDuel Fantasy Football Championship (FFFC). The prize for that was a trip to Las Vegas and an entry into a championship fantasy tournament with 11 other finalists. How cool is that?! (Here's my interview with FanDuel.)
Well, I have got to say, that trip to Las Vegas and the entire FFFC Championship was one of the coolest experiences I have ever had. I got to bring my husband, and we left a day early, just to have a little more time to enjoy ourselves. The representatives from FanDuel were incredible hosts, and the other qualifiers and their guests were super nice.
The championship contest itself was nail-biting. I was in first place through a lot of the early afternoon games but could tell I would probably slip in the late games. (All of my players, except one, were playing in the early games.) Lucky for me, I didn't slip that far. I finished the day in 3rd place!! Not bad for the only girl in the final. My NFL thanks go to Rob Gronkowski, who had a big fantasy day, and to the Green Bay coaching staff that pulled Aaron Rodgers in the 2nd half of the Packers game. (I didn't have him and others did, so I would have slipped further.) And my jeers go to Michael Turner, who had one of the best potential matchups of his season but had an awful game (stats-wise). I should have gone with MJD.
Enough football… This was such a fun event that I aspire to reach the finals again next year. And now I’m even more hooked on fantasy sports. I’ve noticed that more and more of these daily fantasy sites are popping up, and it seems to be a growing industry. If you’re into sports or fantasy sports, you might like to give this stuff a try.