
An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

An introduction to Linux through Windows Subsystem for Linux

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of the options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online, so Google if in doubt.

Setting up WSL

So if you've read this far, I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here; I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unnecessarily complicated because of Windows's fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shut down. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle two different username/password combinations, but it's up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is Ubuntu's package manager; it's what you'll be using to install additional programs on WSL.
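For example, to install something new, you'd run apt-get install followed by a package name. As an illustration (build-essential is a real Ubuntu meta-package that pulls in the gcc compiler toolchain):
sudo apt-get install build-essential
apt-get will resolve dependencies and ask for confirmation before installing anything.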

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Print Working Directory; 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/<username>. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. Probably you won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay, it's actually not empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files, but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux after all. Notice how, when you typed pwd earlier, the address you got was /home/<username>. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/<username>. When we open up Ubuntu, we don't want it tossing us in that random /home/<username> directory, we want our Windows home folder. Let's change that!

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely though, and some people have insane vim setups. Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, vim will open the /etc/passwd file in your shell: a list of colon-separated entries, one user per line.
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looked something like this (a typical entry; your user ID numbers and shell may differ):
pizzatron3000:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that last field, and delete /home/pizzatron3000 by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/<username>. Type that in. For me, the line now looks like
pizzatron3000:x:1000:1000:,,,:/mnt/c/Users/pizzatron3000:/bin/bash
Next, press esc to exit insert mode, then type in the following:
:wq
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Windows home folder! However, things seem to be lacking their usual color...

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Windows home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/<username>, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home/<username>/. ~ 
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the /. at the end is what makes cp grab everything inside the folder, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/<username>) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent: you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass it. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', and your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.
export DISPLAY=:0: if you install VcXsrv (XLaunch), this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.
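For reference, after adding both, the top of your ~/.profile would look something like this (lines starting with # are comments, which bash ignores):
# always ask before deleting files
alias rm='rm -i'
# let graphical apps find the X server provided by VcXsrv
export DISPLAY=:0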

Appendix A: brief intro to top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usr/bin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
  • etc: historically short for 'et cetera'; it contains system-wide configuration files
  • home: equivalent to Windows's C:\Users folder, contains home folders for the different users. In an Ubuntu system, under /home/<username> you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64: 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
  • proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
  • run: directory for programs to store runtime information.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, provides information about different I/O devices to the Linux Kernel. If dev files allows you to access I/O devices, sys files tells you information about these devices.
  • tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.

Appendix B: random resources

EDIT: implemented various changes suggested in the comments. Thanks all!
submitted by HeavenBuilder to linux4noobs

Ethereum on ARM. New Eth2.0 Raspberry Pi 4 image for joining the Medalla multi-client testnet. Step-by-step guide for installing and activating a validator (Prysm, Teku, Lighthouse and Nimbus clients included)

TL;DR: Flash your Raspberry Pi 4, plug in an ethernet cable, connect the SSD disk and power up the device to join the Eth2.0 medalla testnet.
The image takes care of all the necessary steps to join the Eth2.0 Medalla multi-client testnet [1], from setting up the environment and formatting the SSD disk to installing, managing and running the Eth1.0 and Eth2.0 clients.
You will only need to choose an Eth2.0 client, start the beacon chain service and activate / run the validator.
Note: this is an update for our previous Raspberry Pi 4 Eth2 image [2] so some of the instructions are directly taken from there.




You will need an SSD to run the Ethereum clients (without an SSD drive there’s absolutely no chance of syncing the Ethereum blockchain). There are 2 options:
Use a USB portable SSD disk such as the Samsung T5 Portable SSD.
Use a USB 3.0 External Hard Drive Case with an SSD Disk. In our case we used an Inateck 2.5 Hard Drive Enclosure FE2011. Make sure to buy a case with a UASP compliant chip, particularly one of these: JMicron (JMS567 or JMS578) or ASMedia (ASM1153E).
In both cases, avoid getting low quality SSD disks, as the disk is a key component of your node and it can drastically affect the performance (and sync times). Keep in mind that you need to plug the disk into a USB 3.0 port (the blue ones).
1.- Download the image:
SHA256 149cb9b020d1c49fcf75c00449c74c6f38364df1700534b5e87f970080597d87
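Before flashing, it's worth verifying that the downloaded file matches the checksum above (the filename here is a placeholder; use the one you actually downloaded):
sha256sum ethonarm_medalla.img.zip
The output must match the SHA256 string above; otherwise re-download the image.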
2.- Flash the image
Insert the microSD in your Desktop / Laptop and download the file.
Note: If you are not comfortable with command line or if you are running Windows, you can use Etcher [10]
Open a terminal and check your MicroSD device name running:
sudo fdisk -l
You should see a device named mmcblk0 or sdd. Unzip and flash the image:
sudo dd bs=1M if=ubuntu-20.04.1-preinstalled-server-arm64+raspi.img of=/dev/mmcblk0 conv=fdatasync status=progress
3.- Insert the MicroSD into the Raspberry Pi 4. Connect an Ethernet cable and attach the USB SSD disk (make sure you are using a blue USB 3.0 port).
4.- Power on the device
The Ubuntu OS will boot up in less than one minute but you will need to wait approximately 7-8 minutes in order to allow the script to perform the necessary tasks to install the Medalla setup (it will reboot again)
5.- Log in
You can log in through SSH or using the console (if you have a monitor and keyboard attached)
User: ethereum Password: ethereum 
You will be prompted to change the password on first login, so you will need to log in twice.
6.- Forward 30303 port in your router (both UDP and TCP). If you don’t know how to do this, google “port forwarding” followed by your router model. You will need to open additional ports as well depending on the Eth2.0 client you’ve chosen.
7.- Getting console output
You can see what’s happening in the background by typing:
sudo tail -f /var/log/syslog
8.- Grafana Dashboards
There are 5 Grafana dashboards available to monitor the Medalla node (see section “Grafana Dashboards” below).

The Medalla Eth2.0 multi-client testnet

Medalla is the official Eth2.0 multi-client testnet, based on the latest official Eth2.0 specification, the v0.12.2 release [11] (which is intended to be the final one) [12].
In order to run a Medalla Eth 2.0 node you will need 3 components:
The image takes care of the Eth1.0 setup. So, once flashed (and after a first reboot), Geth (Eth1.0 client) starts to sync the Goerli testnet.
Follow these steps to enable your Eth2.0 Ethereum node:
We need to get 32 Goerli ETH (fake ETH) in order to make the deposit in the Eth2.0 contract and run the validator. The easiest way of getting ETH is by joining Prysm's Discord channel.
Open Metamask [14], select the Goerli Network (top of the window) and copy your ETH Address. Go to:
And open the “request-goerli-eth” channel (on the left)
!send $YOUR_ETH_ADDRESS (replace it with the one copied on Metamask)
You will receive enough ETH to run 1 validator.
Now it is time to create your validator keys and the deposit information. For your convenience we’ve packaged the official Eth2 launchpad tool [4]. Go to the EF Eth2.0 launchpad site:
And click “Get started”
Read and accept all warnings. In the next screen, select 1 validator and go to your Raspberry Pi console. Under the ethereum account run:
cd && deposit --num_validators 1 --chain medalla
Choose your mnemonic language and type a password for keeping your keys safe. Write down your mnemonic phrase, press any key, and type it again as requested.
Now you have 2 JSON files under the validator_keys directory: a deposit data file for sending the 32 ETH along with your validator public key to the Eth1 chain (Goerli testnet), and a keystore file with your validator keys.
Back to the Launchpad website, check "I am keeping my keys safe and have written down my mnemonic phrase" and click "Continue".
It is time to send the 32 ETH deposit to the Eth1 chain. You need the deposit file (located in your Raspberry Pi). You can, either copy and paste the file content and save it as a new file in your desktop or copy the file from the Raspberry to your desktop through SSH.
1.- Copy and paste: Connected through SSH to your Raspberry Pi, type:
cat validator_keys/deposit_data-$FILE-ID.json (replace $FILE-ID with yours)
Copy the content (the text in square brackets), go back to your desktop, paste it into your favourite editor and save it as a json file.
2.- Ssh: From your desktop, copy the file:
scp ethereum@$YOUR_RASPBERRYPI_IP:/home/ethereum/validator_keys/deposit_data-$FILE_ID.json /tmp
Replace the variables with your data. This will copy the file to your desktop /tmp directory.
Upload the deposit file
Now, back to the Launchpad website, upload the deposit_data file and select Metamask, click continue and check all warnings. Continue and click “Initiate the Transaction”. Confirm the transaction in Metamask and wait for the confirmation (a notification will pop up shortly).
The Beacon Chain (which is connected to the Eth1 chain) will detect this deposit (that includes the validator public key) and the Validator will be enabled.
Congrats!, you just started your validator activation process.
Time to choose your Eth2.0 client. We encourage you to run Lighthouse, Teku or Nimbus, as Prysm is by far the most used client, and diversity is key to achieving a resilient and healthy Eth2.0 network.
Once you have decided which client to run (as said, try to run one with low network usage), you need to set up the client and start both the beacon chain and the validator.
These are the instructions for enabling each client (Remember, choose just one Eth2.0 client out of 4):
1.- Port forwarding
You need to open the 9000 port in your router (both UDP and TCP)
2.- Start the beacon chain
Under the ethereum account, run:
sudo systemctl enable lighthouse-beacon
sudo systemctl start lighthouse-beacon
3.- Start the validator
We need to import the validator keys. Run under the ethereum account:
lighthouse account validator import --directory=/home/ethereum/validator_keys
Then, type your previously defined password and run:
sudo systemctl enable lighthouse-validator
sudo systemctl start lighthouse-validator
The Lighthouse beacon chain and validator are now enabled
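Whichever client you choose, you can check that its services came up correctly with the standard systemd tools, e.g. for Lighthouse:
sudo systemctl status lighthouse-beacon
sudo journalctl -u lighthouse-beacon -f
The same pattern should work for the Prysm, Teku and Nimbus units listed below.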

1.- Port forwarding
You need to open the 13000 and 12000 ports in your router (both UDP and TCP)
2.- Start the beacon chain
Under the ethereum account, run:
sudo systemctl enable prysm-beacon
sudo systemctl start prysm-beacon
3.- Start the validator
We need to import the validator keys. Run under the ethereum account:
validator accounts-v2 import --keys-dir=/home/ethereum/validator_keys
Accept the default wallet path and enter a password for your wallet. Now enter the password previously defined.
Lastly, set up your password and start the client:
echo "$YOUR_PASSWORD" > /home/ethereum/validator_keys/prysm-password.txt
sudo systemctl enable prysm-validator
sudo systemctl start prysm-validator
The Prysm beacon chain and the validator are now enabled.

1.- Port forwarding
You need to open the 9151 port (both UDP and TCP)
2.- Start the Beacon Chain and the Validator
Under the Ethereum account, check the name of your keystore file:
ls /home/ethereum/validator_keys/keystore*
Set the keystore file name in the teku config file (replace the $KEYSTORE_FILE variable with the file listed above)
sudo sed -i "s|changeme|$KEYSTORE_FILE|" /etc/ethereum/teku.conf
(Double quotes are needed so the shell expands $KEYSTORE_FILE, and the | delimiter keeps the slashes in the file path from breaking the sed expression.)
Set the password previously entered:
echo "yourpassword" > validator_keys/teku-password.txt
Start the beacon chain and the validator:
sudo systemctl enable teku
sudo systemctl start teku
The Teku beacon chain and validator are now enabled.

1.- Port forwarding
You need to open the 19000 port (both UDP and TCP)
2.- Start the Beacon Chain and the Validator
We need to import the validator keys. Run under the ethereum account:
beacon_node deposits import /home/ethereum/validator_keys --data-dir=/home/ethereum/.nimbus --log-file=/home/ethereum/.nimbus/nimbus.log
Enter the password previously defined and run:
sudo systemctl enable nimbus
sudo systemctl start nimbus
The Nimbus beacon chain and validator are now enabled.

Now you need to wait for the Eth1 blockchain and the beacon chain to get synced. In a few hours the validator will get enabled and put into a queue. These are the validator statuses that you will see until its final activation:
Finally, it will get activated and the staking process will start.
Congratulations, you've joined the Medalla Eth2.0 multi-client testnet!

Grafana Dashboards

We configured 5 Grafana Dashboards to let users monitor both Eth1.0 and Eth2.0 clients. To access the dashboards just open your browser and type your Raspberry IP followed by the 3000 port:
http://replace_with_your_IP:3000 user: admin passwd: ethereum 
There are 5 dashboards available:
Lots of info here. You can see for example if Geth is in sync by checking (in the Blockchain section) if Headers, Receipts and Blocks fields are aligned or find Eth2.0 chain info.

Updating the software

We will be keeping the Eth2.0 clients updated through Debian packages in order to keep up with the testnet progress. Basically, you need to update the repo and install the packages through the apt command. For instance, in order to update all packages you would run:
sudo apt-get update && sudo apt-get install geth teku nimbus prysm-beacon prysm-validator lighthouse-beacon lighthouse-validator
Please follow us on Twitter in order to get regular updates and install instructions.


submitted by diglos76 to ethereum

NASPi: a Raspberry Pi Server

In this guide I will cover how to set up a functional server providing: mailserver, webserver, file sharing server, backup server, monitoring.
For this project a dynamic domain name is also needed. If you don't want to spend money on registering a domain name, you can use a free dynamic DNS service instead. I prefer one that lets you set every type of DNS record, which the mailserver specifically needs (TXT records are only available after 30 days, but that's worth not spending ~15€/year on a domain name).
Also, I highly suggest you read the documentation of the software used, since I cannot cover every feature.



(minor utilities not included)


First things first, we need to flash the OS to the SD card. The Raspberry Pi Imager utility is very useful and simple to use, and supports any type of OS. You can download it from the Raspberry Pi download page. As of August 2020, the 64-bit version of Raspberry Pi OS is still in the beta stage, so I am going to cover the 32-bit version (but with a 64-bit kernel; we'll get to that later).
Before moving on and powering on the Raspberry Pi, add an empty file named ssh to the boot partition. Doing so will enable the SSH interface (disabled by default). We can now insert the SD card into the Raspberry Pi.
Once powered on, we need to attach it to the LAN, via an Ethernet cable. Once done, find the IP address of your Raspberry Pi within your LAN. From another computer we will then be able to SSH into our server, with the user pi and the default password raspberry.


Using the raspi-config utility (run sudo raspi-config), we will set a few things. First of all, set a new password for the pi user, using the first entry. Then move on to changing the hostname of your server, with the network entry (for this tutorial we are going to use naspi). Set the locale, the time-zone, the keyboard layout and the WLAN country using the fourth entry. At last, enable SSH by default with the fifth entry.

64-bit kernel

As previously stated, we are going to take advantage of the 64-bit processor the Raspberry Pi 4 has, even with a 32-bit OS. First, we need to update the firmware, then we will tweak some config.
$ sudo rpi-update
$ sudo nano /boot/config.txt
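Inside /boot/config.txt, add the following line (the standard flag that tells the Pi's firmware to boot the 64-bit kernel):
arm_64bit=1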
$ sudo reboot
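After the reboot, you can confirm the 64-bit kernel is active (it should print aarch64):
$ uname -m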

swap size

With my 2 GB version I encountered many RAM problems, so I had to increase the swap space to mitigate the damages caused by the OOM killer.
$ sudo dphys-swapfile swapoff
$ sudo nano /etc/dphys-swapfile
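Inside /etc/dphys-swapfile, set the swap size to 1 GB via its stock parameter:
CONF_SWAPSIZE=1024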
$ sudo dphys-swapfile setup
$ sudo dphys-swapfile swapon
Here we are increasing the swap size to 1 GB. According to your setup you can tweak this setting to add or remove swap. Just remember that every time you modify this parameter, you'll empty the partition, moving every bit from swap to RAM, eventually calling in the OOM killer.


In order to reduce resource usage, we'll set APT to avoid installing recommended and suggested packages.
$ sudo nano /etc/apt/apt.conf.d/01norecommend
APT::Install-Recommends "0";
APT::Install-Suggests "0";


Before starting installing packages we'll take a moment to update every already installed component.
$ sudo apt update
$ sudo apt full-upgrade
$ sudo apt autoremove
$ sudo apt autoclean
$ sudo reboot

Static IP address

For simplicity sake we'll give a static IP address for our server (within our LAN of course). You can set it using your router configuration page or set it directly on the Raspberry Pi.
$ sudo nano /etc/dhcpcd.conf
interface eth0
# adjust these addresses to your LAN
static ip_address=
static routers=
static domain_name_servers=
$ sudo reboot


The first feature we'll set up is the mailserver. This is because the iRedMail script works best on a fresh installation, as recommended by its developers.
First we'll set the hostname to our mail domain name. If your domain is yourdomain.com, the mail domain name will be mail.yourdomain.com (substitute your own domain everywhere it appears below).
$ sudo hostnamectl set-hostname mail.yourdomain.com
$ sudo nano /etc/hosts localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters mail.yourdomain.com naspi
Now we can download and set up iRedMail:
$ sudo apt install git
$ cd /home/pi/Documents
$ sudo git clone https://github.com/iredmail/iRedMail.git
$ cd /home/pi/Documents/iRedMail
$ sudo chmod +x iRedMail.sh
$ sudo bash iRedMail.sh
Now the script will guide you through the installation process.
When asked for the mail directory location, set /var/vmail.
When asked for webserver, set Nginx.
When asked for DB engine, set MariaDB.
When asked for the database password, set a secure and strong password.
When asked for the domain name, set your domain, but without the mail. subdomain.
Again, set a secure and strong password.
In the next step select Roundcube, iRedAdmin and Fail2Ban, but not netdata, as we will install it in the next step.
When asked, confirm your choices and let the installer do the rest.
$ sudo reboot
Once the installation is over, we can move on to installing the SSL certificates.
$ sudo apt install certbot
$ sudo certbot certonly --webroot --agree-tos --email your@email -d mail.yourdomain.com -w /var/www/html/
$ sudo nano /etc/nginx/templates/ssl.tmpl
ssl_certificate /etc/letsencrypt/live/mail.yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/mail.yourdomain.com/privkey.pem;
$ sudo service nginx restart
$ sudo nano /etc/postfix/main.cf
smtpd_tls_key_file = /etc/letsencrypt/live/mail.yourdomain.com/privkey.pem
smtpd_tls_cert_file = /etc/letsencrypt/live/mail.yourdomain.com/fullchain.pem
smtpd_tls_CAfile = /etc/letsencrypt/live/mail.yourdomain.com/chain.pem
$ sudo service postfix restart
$ sudo nano /etc/dovecot/dovecot.conf
ssl_cert = </etc/letsencrypt/live/mail.yourdomain.com/fullchain.pem
ssl_key = </etc/letsencrypt/live/mail.yourdomain.com/privkey.pem
$ sudo service dovecot restart
Now we have to tweak some Nginx settings in order to not interfere with other services.
$ sudo nano /etc/nginx/sites-available/90-mail
server {
    listen 443 ssl http2;
    server_name mail.yourdomain.com;
    root /var/www/html;
    index index.php index.html;

    include /etc/nginx/templates/misc.tmpl;
    include /etc/nginx/templates/ssl.tmpl;
    include /etc/nginx/templates/iredadmin.tmpl;
    include /etc/nginx/templates/roundcube.tmpl;
    include /etc/nginx/templates/sogo.tmpl;
    include /etc/nginx/templates/netdata.tmpl;
    include /etc/nginx/templates/php-catchall.tmpl;
    include /etc/nginx/templates/stub_status.tmpl;
}

server {
    listen 80;
    server_name mail.yourdomain.com;
    return 301 https://$host$request_uri;
}
$ sudo ln -s /etc/nginx/sites-available/90-mail /etc/nginx/sites-enabled/90-mail
$ sudo rm /etc/nginx/sites-*/00-default*
$ sudo nano /etc/nginx/nginx.conf
user www-data;
worker_processes 1;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server_names_hash_bucket_size 64;
    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/conf-enabled/*.conf;
    include /etc/nginx/sites-enabled/*;
}
$ sudo service nginx restart

.local domain

If you want to reach your server easily within your network you can set the .local domain to it. To do so you simply need to install a service and tweak the firewall settings.
$ sudo apt install avahi-daemon
$ sudo nano /etc/nftables.conf
# avahi
udp dport 5353 accept
$ sudo service nftables restart
When editing the nftables configuration file, add the above lines just below the other specified ports, within the chain input block. This is needed because avahi communicates via the 5353 UDP port.
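Once avahi-daemon is running and the port is open, you can check that the name resolves from any other machine on your LAN:
$ ping naspi.local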


At this point we can start setting up the disks. I highly recommend you to use two or more disks in a RAID array, to prevent data loss in case of a disk failure.
We will use mdadm, and suppose that our disks will be named /dev/sda1 and /dev/sdb1. To find out the names issue the sudo fdisk -l command.
$ sudo apt install mdadm
$ sudo mdadm --create -v /dev/md/RED -l 1 --raid-devices=2 /dev/sda1 /dev/sdb1
$ sudo mdadm --detail /dev/md/RED
$ sudo -i
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ exit
$ sudo mkfs.ext4 -L RED -m .1 -E stride=32,stripe-width=64 /dev/md/RED
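One step worth making explicit: the mount point must exist before mounting, so create it first:
$ sudo mkdir -p /NAS/RED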
$ sudo mount /dev/md/RED /NAS/RED
The filesystem used is ext4, because it's the fastest. The RAID array is located at /dev/md/RED, and mounted to /NAS/RED.
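At any moment you can check the state of the array (including the progress of the initial sync) with:
$ cat /proc/mdstat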


To automount the disks at boot, we will modify the fstab file. Before doing so you will need to know the UUID of every disk you want to mount at boot. You can find out these issuing the command ls -al /dev/disk/by-uuid.
$ sudo nano /etc/fstab
# Disk 1
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /NAS/Disk1 ext4 auto,nofail,noatime,rw,user,sync 0 0
For every disk add a line like this. To verify the functionality of fstab issue the command sudo mount -a.


To monitor your disks, the S.M.A.R.T. utilities are a super powerful tool.
$ sudo apt install smartmontools
$ sudo nano /etc/default/smartmontools
$ sudo nano /etc/smartd.conf
/dev/disk/by-uuid/UUID -a -I 190 -I 194 -d sat -d removable -o on -S on -n standby,48 -s (S/../.././04|L/../../1/04) -m your@email 
$ sudo service smartd restart
For every disk you want to monitor add a line like the one above.
About the flags:
· -a: full scan.
· -I 190, -I 194: ignore the 190 and 194 attributes, since those are temperature values and would trigger the alarm at every temperature variation.
· -d sat, -d removable: removable SATA disks.
· -o on: offline testing, if available.
· -S on: attribute saving, between power cycles.
· -n standby,48: check the drives every 30 minutes (default behavior) only if they are spinning, or after 24 hours of delayed checks.
· -s (S/../.././04|L/../../1/04): short test every day at 4 AM, long test every Monday at 4 AM.
· -m your@email: the email address to which alerts are sent in case of problems.
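If you don't want to wait for the scheduled self-tests, you can also trigger one manually with smartctl (part of the same smartmontools package; /dev/sda stands for whichever disk you want to test):
$ sudo smartctl -t short /dev/sda
$ sudo smartctl -a /dev/sda
The first command starts a short self-test; the second prints the full SMART report once it's done.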

Automount USB devices

Two steps ago we set up the fstab file in order to mount the disks at boot. But what if you want to mount a USB disk immediately when plugged in? Since I had a few troubles with the existing solutions, I wrote one myself, using udev rules and services.
$ sudo apt install pmount
$ sudo nano /etc/udev/rules.d/11-automount.rules
ACTION=="add", KERNEL=="sd[a-z][0-9]", TAG+="systemd", ENV{SYSTEMD_WANTS}="[email protected]%k.service" 
$ sudo chmod 0777 /etc/udev/rules.d/11-automount.rules
$ sudo nano /etc/systemd/system/automount@.service
[Unit]
Description=Automount USB drives
BindsTo=dev-%i.device
After=dev-%i.device

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/automount %I
ExecStop=/usr/bin/pumount /dev/%I
$ sudo chmod 0777 /etc/systemd/system/automount@.service
$ sudo nano /usr/local/bin/automount
#!/bin/bash
PART=$1
FS_UUID=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $3}'`
FS_LABEL=`lsblk -o name,label,uuid | grep ${PART} | awk '{print $2}'`
DISK1_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
DISK2_UUID='xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
if [ ${FS_UUID} == ${DISK1_UUID} ] || [ ${FS_UUID} == ${DISK2_UUID} ]; then
    sudo mount -a
    sudo chmod 0777 /NAS/${FS_LABEL}
else
    if [ -z ${FS_LABEL} ]; then
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${PART}
    else
        /usr/bin/pmount --umask 000 --noatime -w --sync /dev/${PART} /media/${FS_LABEL}
    fi
fi
$ sudo chmod 0777 /uslocal/bin/automount
The udev rule triggers when the kernel announce a USB device has been plugged in, calling a service which is kept alive as long as the USB remains plugged in. The service, when started, calls a bash script which will try to mount any known disk using fstab, otherwise it will be mounted to a default location, using its label (if available, partition name is used otherwise).
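To make the new rule active without rebooting (the standard udev reload procedure):
$ sudo udevadm control --reload-rules && sudo udevadm trigger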


Let's now install netdata. For this another handy script will help us.
$ bash <(curl -Ss https://my-netdata.io/kickstart.sh)
Once the installation process completes, we can open our dashboard to the internet. We will use a dedicated subdomain, e.g. netdata.yourdomain.com.
$ sudo apt install python-certbot-nginx
$ sudo nano /etc/nginx/sites-available/20-netdata
upstream netdata {
    server unix:/var/run/netdata/netdata.sock;
    keepalive 64;
}

server {
    listen 80;
    server_name netdata.yourdomain.com;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://netdata;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/20-netdata /etc/nginx/sites-enabled/20-netdata
$ sudo nano /etc/netdata/netdata.conf
# NetData configuration
[global]
    hostname = NASPi
[web]
    allow netdata.conf from = localhost fd* 192.168.* 172.*
    bind to = unix:/var/run/netdata/netdata.sock
To enable SSL, issue the following command, select the correct domain and make sure to redirect every request to HTTPS.
$ sudo certbot --nginx
Now configure the alarm notifications. I suggest you read through the stock file before modifying it, to enable every service you would like. You'll spend some time on it, yes, but eventually you will be very satisfied.
$ sudo nano /etc/netdata/health_alarm_notify.conf
# Alarm notification configuration

# email global notification options
SEND_EMAIL="YES"
# Sender address
EMAIL_SENDER="netdata@yourdomain.com"
# Recipients addresses
DEFAULT_RECIPIENT_EMAIL="your@email"

# telegram (telegram.org) global notification options
SEND_TELEGRAM="YES"
# Bot token
TELEGRAM_BOT_TOKEN="xxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
# Chat ID
DEFAULT_RECIPIENT_TELEGRAM="xxxxxxxxx"

###############################################################################
# RECIPIENTS PER ROLE

# generic system alarms
role_recipients_email[sysadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sysadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# DNS related alarms
role_recipients_email[domainadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[domainadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# database servers alarms
role_recipients_email[dba]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[dba]="${DEFAULT_RECIPIENT_TELEGRAM}"

# web servers alarms
role_recipients_email[webmaster]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[webmaster]="${DEFAULT_RECIPIENT_TELEGRAM}"

# proxy servers alarms
role_recipients_email[proxyadmin]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[proxyadmin]="${DEFAULT_RECIPIENT_TELEGRAM}"

# peripheral devices
role_recipients_email[sitemgr]="${DEFAULT_RECIPIENT_EMAIL}"
role_recipients_telegram[sitemgr]="${DEFAULT_RECIPIENT_TELEGRAM}"
$ sudo service netdata restart


Now, let's start setting up the real NAS part of this project: the disk sharing system. First we'll set up Samba, for the sharing within your LAN.
$ sudo apt install samba samba-common-bin
$ sudo nano /etc/samba/smb.conf
[global]
# Network
workgroup = NASPi
interfaces = eth0
bind interfaces only = yes

# Log
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
panic action = /usr/share/samba/panic-action %d

# Server role
server role = standalone server
obey pam restrictions = yes

# Sync the Unix password with the SMB password.
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
security = user

#======================= Share Definitions =======================

[Disk 1]
comment = Disk1 on LAN
path = /NAS/RED
valid users = NAS
force group = NAS
create mask = 0777
directory mask = 0777
writeable = yes
admin users = NASdisk
$ sudo service smbd restart
Now let's add a user for the share:
$ sudo useradd NASbackup -m -G users,NAS
$ sudo passwd NASbackup
$ sudo smbpasswd -a NASbackup
And at last let's open the needed ports in the firewall:
$ sudo nano /etc/nftables.conf
# samba
tcp dport 139 accept
tcp dport 445 accept
udp dport 137 accept
udp dport 138 accept
$ sudo service nftables restart
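From another Linux machine you can quickly verify that the share is visible (smbclient ships with the samba packages):
$ smbclient -L //naspi -U NASbackup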


Now let's set up the service to share disks over the internet. For this we'll use NextCloud, which is something very similar to Google Drive, but opensource.
$ sudo apt install php-xmlrpc php-soap php-apcu php-smbclient php-ldap php-redis php-imagick php-mcrypt
First of all, we need to create a database for nextcloud.
$ sudo mysql -u root -p
CREATE DATABASE nextcloud;
CREATE USER nextcloud@localhost IDENTIFIED BY 'password';
GRANT ALL ON nextcloud.* TO nextcloud@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
EXIT;
Then we can move on to the installation.
$ cd /tmp && wget https://download.nextcloud.com/server/releases/latest.zip
$ sudo unzip latest.zip
$ sudo mv nextcloud /var/www/nextcloud/
$ sudo chown -R www-data:www-data /var/www/nextcloud
$ sudo find /var/www/nextcloud/ -type d -exec sudo chmod 750 {} \;
$ sudo find /var/www/nextcloud/ -type f -exec sudo chmod 640 {} \;
$ sudo nano /etc/nginx/sites-available/10-nextcloud
upstream nextcloud {
    server unix:/run/php/php7.3-fpm.sock;  # adjust to your PHP-FPM socket path
    keepalive 64;
}

server {
    server_name nextcloud.yourdomain.com;
    root /var/www/nextcloud;
    listen 80;

    add_header Referrer-Policy "no-referrer" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-Download-Options "noopen" always;
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Permitted-Cross-Domain-Policies "none" always;
    add_header X-Robots-Tag "none" always;
    add_header X-XSS-Protection "1; mode=block" always;
    fastcgi_hide_header X-Powered-By;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
    rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
    rewrite ^/.well-known/webfinger /public.php?service=webfinger last;

    location = /.well-known/carddav {
        return 301 $scheme://$host:$server_port/remote.php/dav;
    }
    location = /.well-known/caldav {
        return 301 $scheme://$host:$server_port/remote.php/dav;
    }

    client_max_body_size 512M;
    fastcgi_buffers 64 4K;

    gzip on;
    gzip_vary on;
    gzip_comp_level 4;
    gzip_min_length 256;
    gzip_proxied expired no-cache no-store private no_last_modified no_etag auth;
    gzip_types application/atom+xml application/javascript application/json application/ld+json application/manifest+json application/rss+xml application/vnd.geo+json application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/bmp image/svg+xml image/x-icon text/cache-manifest text/css text/plain text/vcard text/vnd.rim.location.xloc text/vtt text/x-component text/x-cross-domain-policy;

    location / {
        rewrite ^ /index.php;
    }
    location ~ ^\/(?:build|tests|config|lib|3rdparty|templates|data)\/ {
        deny all;
    }
    location ~ ^\/(?:\.|autotest|occ|issue|indie|db_|console) {
        deny all;
    }
    location ~ ^\/(?:index|remote|public|cron|core\/ajax\/update|status|ocs\/v[12]|updater\/.+|oc[ms]-provider\/.+)\.php(?:$|\/) {
        fastcgi_split_path_info ^(.+?\.php)(\/.*|)$;
        set $path_info $fastcgi_path_info;
        try_files $fastcgi_script_name =404;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $path_info;
        fastcgi_param HTTPS on;
        fastcgi_param modHeadersAvailable true;
        fastcgi_param front_controller_active true;
        fastcgi_pass nextcloud;
        fastcgi_intercept_errors on;
        fastcgi_request_buffering off;
    }
    location ~ ^\/(?:updater|oc[ms]-provider)(?:$|\/) {
        try_files $uri/ =404;
        index index.php;
    }
    location ~ \.(?:css|js|woff2?|svg|gif|map)$ {
        try_files $uri /index.php$request_uri;
        add_header Cache-Control "public, max-age=15778463";
        add_header Referrer-Policy "no-referrer" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-Download-Options "noopen" always;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Permitted-Cross-Domain-Policies "none" always;
        add_header X-Robots-Tag "none" always;
        add_header X-XSS-Protection "1; mode=block" always;
        access_log off;
    }
    location ~ \.(?:png|html|ttf|ico|jpg|jpeg|bcmap)$ {
        try_files $uri /index.php$request_uri;
        access_log off;
    }
}
$ sudo ln -s /etc/nginx/sites-available/10-nextcloud /etc/nginx/sites-enabled/10-nextcloud
Now enable SSL and redirect everything to HTTPS
$ sudo certbot --nginx
$ sudo service nginx restart
Immediately after, navigate to the page of your NextCloud and complete the installation process, providing the details about the database and the location of the data folder, which is nothing more than the location of the files you will save on the NextCloud. Because it might grow large, I suggest specifying a folder on an external disk.


Now to the backup system. For this we'll use Minarca, a web interface based on rdiff-backup. Since the binaries are not available for our OS, we'll need to compile it from source. It's not a big deal; even our small Raspberry Pi 4 can handle the process.
$ cd /home/pi/Documents
$ sudo git clone
$ cd /home/pi/Documents/minarca
$ sudo make build-server
$ sudo apt install ./minarca-server_x.x.x-dxxxxxxxx_xxxxx.deb
$ sudo nano /etc/minarca/minarca-server.conf
# Minarca configuration.

# Logging
LogLevel=DEBUG
LogFile=/var/log/minarca/server.log
LogAccessFile=/var/log/minarca/access.log

# Server interface
ServerHost=
ServerPort=8080

# rdiffweb
Environment=development
FavIcon=/opt/minarca/share/minarca.ico
HeaderLogo=/opt/minarca/share/header.png
HeaderName=NAS Backup Server
WelcomeMsg=Backup system based on rdiff-backup, hosted on RaspberryPi
DefaultTheme=default

# Enable Sqlite DB Authentication.
SQLiteDBFile=/etc/minarca/rdw.db

# Directories
MinarcaUserSetupDirMode=0777
MinarcaUserSetupBaseDir=/NAS/Backup/Minarca/
Tempdir=/NAS/Backup/Minarca/tmp/
MinarcaUserBaseDir=/NAS/Backup/Minarca/
$ sudo mkdir /NAS/Backup/Minarca/
$ sudo chown minarca:minarca /NAS/Backup/Minarca/
$ sudo chmod 0750 /NAS/Backup/Minarca/
$ sudo service minarca-server restart
As always we need to open the required ports in our firewall settings:
$ sudo nano /etc/nftables.conf
# minarca
tcp dport 8080 accept
$ sudo service nftables restart
And now we can open it to the internet:
$ sudo nano /etc/nginx/sites-available/30-minarca
upstream minarca {
    server;
    keepalive 64;
}

server {
    server_name minarca.yourdomain.com;

    location / {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://minarca;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }

    listen 80;
}
$ sudo ln -s /etc/nginx/sites-available/30-minarca /etc/nginx/sites-enabled/30-minarca
And enable SSL support, with HTTPS redirect:
$ sudo certbot --nginx
$ sudo service nginx restart

DNS records

Lastly, you will need to set up your DNS records, in order to avoid having your mail rejected or sent to spam.

MX record

name: @
value: mail.yourdomain.com
TTL (if present): 90

PTR record

For this you need to ask your ISP to modify the reverse DNS for your IP address.

SPF record

name: @
value: v=spf1 mx ~all
TTL (if present): 90

DKIM record

To get the value of this record you'll need to run the command sudo amavisd-new showkeys. The value is between the parentheses (it should start with v=DKIM1), but remember to remove the double quotes and the line breaks.
name: dkim._domainkey
value: v=DKIM1; p= ...
TTL (if present): 90

DMARC record

name: _dmarc
value: v=DMARC1; p=none; pct=100; rua=mailto:postmaster@yourdomain.com
TTL (if present): 90

Router ports

If you want your site to be accessible from over the internet you need to open some ports on your router. Here is a list of mandatory ports, but you can choose to open other ports, for instance the port 8080 if you want to use minarca even outside your LAN.

mailserver ports

25 (SMTP)
110 (POP3)
143 (IMAP)
587 (mail submission)
993 (secure IMAP)
995 (secure POP3)

ssh port

If you want to open your SSH port, I suggest moving it to something different from port 22 (the default), to mitigate attacks from the outside.


webserver ports

80 (HTTP)
443 (HTTPS)

The end?

And now the server is complete. You have a mailserver capable of receiving and sending emails, a super monitoring system, a cloud server to have your files wherever you go, a Samba share to have your files on every computer at home, a backup server for every device you own, and a webserver if you ever want to have a personal website.
But now you can do whatever you want, add things, tweak settings and so on. Your imagination is your only limit (almost).
EDIT: typos ;)
submitted by Fly7113 to raspberry_pi

Unbrick a switch

Unbrick a switch
So recently I acquired some gear from eBay; it turns out one of the items was a Netgear FS728TP that was bricked. I needed some time alone to decompress, so I decided to see if I could fix it. I do not take credit for the original work of discovering the datasheet or UART pins; this was found in a Google cache of an old Netgear forum. The images and guide are all original content from myself. Anyway, here goes the guide.

FS728TP UART Recovery Unbricking

If you managed to brick your FS728TP with a bad firmware update, rollback, etc., this guide aggregates data found around the net. This process involves soldering, serial communications and some basic hardware knowledge. This device uses a Marvell 88E6218-LG01 with UART p52 = Rx, p53 = Tx. U27 is a chip similar to a MAX232, where p11 and p12 connect to the UART on the Marvell controller.


  • Netgear FS728TPv1 Firmware
  • You will need the package for the boot ROM and the one for the latest firmware
  • Hyperterminal, puttyplus or something that can send files via XMODEM
  • Soldering Iron
  • FTDI breakout board or cable

Soldering UART

Begin by unplugging everything and opening the case of the FS728TPv1. Once open, find U27. It will be near the back of the board, by J8 and the Marvell controller, and may be under the MAC sticker. Find pins 11 and 12 as shown in the photo. Solder a wire to each of these pins and connect them to the RX and TX pins of your FTDI cable or board. Be sure to also connect GND to a suitable location, such as a screw on the board.


WARNING: LETHAL VOLTAGE Cover the power supply with a piece of plexiglass, FR4 or other non-conductive material to protect yourself from the mains power. Use electrical tape to hold it in place. This will also act as an air duct to keep the PSU cool while the case is off.
With the FTDI chip connected to your PC, open a serial session using:
  • baudrate = 38400, data bits = 8, parity = none, stop bits = 1, flow control = none
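On Linux, any serial terminal will do as well; for example with screen (assuming the FTDI adapter shows up as /dev/ttyUSB0):
screen /dev/ttyUSB0 38400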
Now, boot the switch. If nothing happens, try swapping your RX/TX wires. If successful, you will be presented with a screen that says Autoboot in 2 seconds - press RETURN or Esc. to abort and enter prom
Press RETURN or Esc
The following menu will show in the terminal:
  • 1 = software download
  • 2 = flash file Erase
  • 3 = diagnostic mode
  • 4 = password recovery procedure entry
  • 5 = Set baudrate terminal
  • 6 = back
Select option 1 to flash the firmware files.


For this step, I used hyperterminal. Any other terminal with XMODEM file capabilities should work.
If you need to flash the BOOT CODE, flash the FIRMWARE first; this will take about 25-30 minutes. Doing so will re-enable the web interface and allow you to flash the BOOT CODE and FIRMWARE from the web interface or Smartwizard Discovery.
If you already have the BOOT CODE, instead flash FIRMWARE. This will take about 25-30 minutes.

Final Steps

When the firmware has been flashed successfully, reboot the device. You should see system tests PASS and Decompressing SW from image-1.
Congratulations, you have unbricked your switch. Now sell it and get something better than a 10 year old Netgear switch.

FYI, this is cross posted to my Gist here:

edit: added photo of setup
submitted by BinaryConstruct to homelab


Disclaimer: This is my arbitrary summary for myself, so there could be some misunderstandings.
If you want the full picture, I recommend reading the full thread.
But, for a guy who settles for a 'less than perfect' summary, why not share my own?

All the key research questions in Coordicide have been answered. The remaining challenges are implementing and testing our solution. We are implementing our solution into the Pollen Testnet and typing it up into our research specifications (the specifications, while not complete, will hopefully be made publicly available soon).
After these tasks are done, our solution will go through a rigorous testing phase. During this time, we will collect performance data, look for attack vectors, and tune the parameters.

the only way for IOTA and crypto-currencies in general to be adopted is via clear and strong regulatory guidelines and frameworks.
We often have the situation where a company reaches out to us and wants to use the IOTA token, but they are simply not able to due to uncertainties in regards to taxes, accounting, legal and regulatory questions.
The EU is taking a great stance with their new proposal (called MICA) to provide exactly this type of regulatory clarity and guidance we need. So we are very happy about that and see this as a great development for the adoption of IOTA.
We are very active in INATBA (in fact Julie is still on the board), are in the Executive Committee of the Digital Chamber of Commerce, and are actively working with other regulatory bodies around the world. I think that especially in 2021, we will be much more pro-active with our outreach and efforts to push for more regulatory guidance (for the IOTA Token, for Tokenization, Smart Contracts, etc.). We are already talking with companies to start case studies around what it means to use the IOTA token - so that will be exciting.

actual product development, will really help us to convince regulators and lawmakers of what IOTA is intended for and where its potential lies.

We are actively participating in regulatory matters via entities such as INATBA, as well as with local regulators in individual countries to help shape regulations to favor the adoption of crypto.
once the use cases can display real-world value, then deployments will happen regardless.

"The multiverse" is quite an ingenious and promising idea that has many components. Actually, quite some of those are being incorporated to the Coordicide already now. The most "controversial" part, though, is the pure on-Tangle voting -- Hans thinks it should work fine while I think that it can be attacked

Several of our modules have been developed jointly with researchers in academia. For example, our rate control module is being developed jointly with professor Robert Shorten and his team at Imperial College. Moreover, our team has published several papers in peer-reviewed journals and conference proceedings,
We are also making sure the entire protocol is audited. First, we have a grant given to Professor Mauro Conti specifically to vet our solution.
You may hear an announcement regarding a similar grant to a second university. Second, we will eventually offer bug bounties on our testnet. Lastly, we will hire some firm to audit our software and our protocol.

I would say that the entire enterprise and also the broader crypto-community is certainly actively following our developments around Coordicide.
Once that is removed, and with the introduction of Tokenization and Smart Contracts as Layer 2 solutions, there is no reason not to switch to IOTA.
there are probably even more who will reach out once we've achieved our objective of being production ready.

Our objective is to have Honey ready within the first half of 2021.
we are very confident that Coordicide will happen in time.

For Chrysalis, we will implement a deposit system. In order for an address to receive dust (which will be explicitly defined as any output with value less than a certain threshold), that address must already have a minimum balance (either 1 MIota or 1 KIota). The total ordering in conflict white flag makes this solution incredibly easy to implement.
this solution in the Coordicide needs alterations, because of the lack of total ordering.

Sharding is part of IOTA 3.0 and currently still in research.
there are of course some hard questions that need to be answered but we are pretty confident that these questions can and will be answered.

Having these layers helps keep the protocol modular and organized. Indeed, it is important to be able to track dependencies between the modules, particularly for standardization purposes. As your question suggests, a key component of standardization is the ability to update the standard (no living protocol is completely static). Standardization will be accompanied by a versioning system, which tracks backwards compatibility.

Well, let me try to clear these things up.
-The congestion control mechanisms are indifferent to the types of messages in the tangle. Thus non-value transactions (data messages) will be processed in the same way as value transactions (value messages). Thus, in times of congestion, a node will require mana in order to issue either of them.
-You will not need mana to simply “set up a node” and monitor the tangle.
However, in order to send transactions (or issue any messages) you will need mana in times of congestion.

The next big one is next month: Odyssey Momentum. This is a huge multi-day DLT-focused hackathon with a lot of teams and big companies/governments involved working on solutions for the future. The IOTA Foundation is an Ecosystem member of Odyssey and we will be virtually present during the hackathon to help and guide teams working with IOTA.

Coordicide will not fail. We are working very carefully to make sure that coordicide is a success, and we will not launch Iota 2.0 until it has gone through the proper testing.

Everyone internally and also our partners are very confident in the path that we've defined. Failure is not an option for us :)

We will most probably see a slight delay and see Nectar early 2021 instead.

No, IF is not running out of money; this narrative has been repeated for 3 years now, yet we're still operating. Of course, bear markets impact our theoretical runway, but the IOTA Foundation is hard at work at diversifying revenue streams so that we become less and less dependent on the token holdings.

We are constantly working on getting more exchanges to list IOTA; we do not, however, pay for listings.
Some exchanges require a standard signature scheme; with the introduction of ed25519 in Chrysalis phase 2, that will no longer be a restriction.

Being feeless is one of the most important aspects here, since a new technology usually only gets adopted if it is either better or easier to use than existing solutions, or if it enables new use cases that would be completely impossible with the existing infrastructure. That is the single biggest reason why I think that IOTA will prevail.
An example of such a "new" use case is the Kupcrush use case presented by Terry.

There are so many amazing use cases enabled by IOTA. I would say the specific use case that gets me really excited is conditional access control based on IOTA payments, in particular for the sharing economy.
IOTA Access + IOTA tokens really enable so many exciting new possibilities.

In fact, with coordicide research coming to an end, we have already started to look into sharding. Indeed, sharding will provide the scalability needed to handle the demands of an IoT-enabled world.

We have designed Iota 2.0 to not have large concentrations of power. Unlike PoS systems, Iota will not be a blockchain and thus will not be limited by a leader election process.
In a DAG, information can be added in parallel, so nodes with small amounts of mana can create messages at the same time as large mana holders.

In any DLT, "voting" needs a sybil protection system, and thus "voting power" is linked to some scarce resource. Typically the allocation of any resource follows some sort of Zipf distribution, meaning that some people will have a lot and others not. The best we can do is to make sure that the little guys get their fair share of voting power.

With Chrysalis and coordicide we are finally moving to being production ready, which will most probably also lead to a bigger market share, as partners will start to use the technology, which will increase the demand for tokens.

Privacy features are currently not being researched, and it might be hard to support them on layer 1, but they could definitely be implemented as a second-layer solution.

We focus on making the base layer of IOTA (namely transactional settlement) as secure and fast as possible. Many of the greater extensions to this core functionality are built on layer 2 (we already have Streams, Access, Identity and now also Smart Contracts)

There are discussions about increasing the supply to still allow micro-transactions if the token were to cost, say, a few hundred dollars per MIOTA, but we have not made a final decision yet.

We think we have an edge over other technologies, especially when it comes to fee-less transactions, which allow a lot of use cases that would otherwise be impractical or impossible. Adoption is not a given, but a useful technology with the right functionality will be utilized.

That is why we have such a widespread strategy of driving IOTA forward, not only in its development but also in industry, academia and regulatory circles, raising awareness, funding ecosystem efforts, etc. I am confident in the position we are in right now.
There is a clear demand for financial disruption, data security, and automation.
Someone has to assemble a killer application that meets that demand; the IF is pushing for this with partners.

Our goal is to have at least 1000 TPS.

Personally, I think our congestion control algorithm is our greatest innovation.
Our algorithm can be used in any adversarial setting requiring fairness and consistency. Keep an eye out for a blog post that I am writing about it.

What about proof of inclusion?
I have started implementing a proof of concept locally, and the required data structures and payload types are already done, but we won't be able to integrate this into goshimmer until we are done with the current refactoring of the code.

Many of the changes that are part of Chrysalis would have made it, and will make it, into Coordicide, like the atomic transactions with a binary layout. The approach we took was actually the opposite: what improvements can we already make in the current network without having to wait for Coordicide, and at the same time without disrupting or delaying Coordicide?

All the key research questions in coordicide have been answered; in reality, the biggest research challenges are behind us.

When will Chrysalis part 2 be live?
We are still aiming for 2020. We want to have a testnet where everyone can test things like the new APIs, along with some initial implementations of specific client libraries to work with. This will also allow us to test the node (both Hornet and Bee) implementations more in the wild.
The new wallet will also be tested on that testnet.
The whole testing phase will be a big endeavor, and, at the same time, we will also start auditing many of the implementations.

We are currently in contact with OMG, and they are advising us on how to draft our specifications in order to ease the standardization process. Coordicide, or Iota 2.0, actually gives us a chance to start off with a clean slate, since we are building it from the ground up with standardization in mind.

The focus at this point is delivering Chrysalis and Coordicide. DeFi could possibly be done with smart contracts at some point, but it's not a focus at this stage.

What about price?
We are quite frankly not worried about that. Knowing everything that we have in the pipeline, our ecosystem, and how everything around IOTA will mature over the next few months, I am sure that the entire crypto ecosystem will wake up to IOTA and its potential. Many participants in the market still have outdated information about us from 2017, so there is certainly some education to do. But with Chrysalis and the Coordicide progress, all of that will change.

At its core, the IOTA Foundation is a leader in trust protocols and digital infrastructure. We will always remain an R&D organization at heart, as there is a lot more development we can lead when it comes to making our society and economy more fair, trustless and autonomous.
I certainly see us evolving into a broader think tank and expert group advising governments and large corporations on their strategies, in particular around data, identity and IoT.

Barely any cryptocurrency gets used in the real world today. IOTA will soon start to actually be used in real-world products, and it is likely that this will also have an impact on the price (but I can't really give any details just yet).

ISCP (IOTA Smart Contract Protocol) is based on cryptographic consensus via BLS threshold signatures. That means a certain pre-defined number of key holders have to come together to alter the state of the contract or to send funds around. If the majority of the nodes are offline, the threshold will not be reached and the contract cannot be executed anymore. There are various ways we are looking at right now to make SC recovery and easy transitions possible.
The beauty of ISCP is that we have a validator set which you can define (it can be 3, or it can be 100+), and via an open selection process we can really ensure that the network will be fully decentralized and permissionless. Every smart contract committee (which will be its own network, of course) leverages the IOTA ledger for security and to make it fully auditable and tamper-proof. This means that if a committee acts wrongly, we have cryptographic proof of it and can take certain actions.
This makes our approach to smart contracts very elegant, secure and scalable.

No, we will not standardize Iota 1.5. Yes, we do hope that standardization will help adoption by making it easier for corporates to learn our tech.

In general, I also have to add that I'm really impressed by the force of our research department, and I think we have the necessary abilities to handle all future challenges that we might be facing.

In coordicide, i.e. Iota 2.0, yes, all nodes have to process all transactions and must receive all data. Our next major project is sharding, i.e. Iota 3.0, which will remove this requirement and increase scalability.
FPC begins to be vulnerable to attack if the attacker has 30%-40% of the active consensus mana.

There is no doubt about coordicide working as envisioned.

When will companies fully implement iota tech?
Soon(TM) :P

Well, first, we are going to make sure that we don't need a plan B :) Second, our plans for the actual deployment are still under discussion. Lastly, we will make sure there is some sort of fail-safe, e.g. turning the coordinator back on, or something like that.

All the key research questions in coordicide have been answered, and each module is designed.

What will be standardized is the behavior of the modules, particularly their interactions with other nodes and wallets. Implementation details will not be standardized. The standardization will allow anyone to build a node that can run on the IOTA 2.0 network.

Tangle EE has its own Slack (private) and calls, so the lack of activity can probably be explained in that fashion. Coordicide will have an impact on all of IOTA :) There's certainly a lot of entities awaiting it, but most will start building already with Chrysalis v2, since it solves most pain points.

If there are no conflicts, a message will be confirmed if it receives some approvals. We estimate that this should happen within 10-20 seconds.
To resolve a conflict, FPC will typically take another 4 minutes, according to our simulator. Since conflicts will not affect honest users, most transactions will have very short confirmation times.

A colored coin supply cannot exceed that of all Iota. You could effectively mint a colored coin supply using a smart contract, although there would be performance downsides. There are no plans to increase the supply. The convergence to binary will not affect the supply nor anyone's balances.

Both, Radix and Avalanche have some similarities to IOTA:
- Avalanche has a similar voting scheme and also uses a DAG
- Radix uses a sharding approach that is similar to IOTA's "fluid sharding"
I don't really consider them competitors to our vision, since both projects still rely on fees to make the network work.
A centralized solution can, however, never be feeless, and being feeless is not just a "nice feature" but absolutely crucial for DLT to succeed in the real world.
Having fees makes things a lot easier, and Coordicide would already be "done" if we could just use fees, but I really believe that it is worth going the extra mile and building a system that is able to be better than existing tech.
submitted by btlkhs to Iota

First Time Going Through Coding Interviews?

This post draws on my personal experiences and challenges over the past term at school, which I entered with hardly any knowledge of DSA (data structures and algorithms) and problem-solving strategies. As a self-taught programmer, I was a lot more familiar and comfortable with general programming, such as object-oriented programming, than with the problem-solving skills required in DSA questions.
This post reflects my journey throughout the term and the resources I turned to in order to quickly improve for my coding interview.
Here are some common questions and answers.
What's the interview process like at a tech company?
Good question. It's actually pretty different from most other companies.

What It's Like To Interview For A Coding Job

First time interviewing for a tech job? Not sure what to expect? This article is for you.

Here are the usual steps:

  1. First, you’ll do a non-technical phone screen.
  2. Then, you’ll do one or a few technical phone interviews.
  3. Finally, the last step is an onsite interview.
Some companies also throw in a take-home code test—sometimes before the technical phone interviews, sometimes after.
Let’s walk through each of these steps.

The non-technical phone screen

This first step is a quick call with a recruiter—usually just 10–20 minutes. It's very casual.
Don’t expect technical questions. The recruiter probably won’t be a programmer.
The main goal is to gather info about your job search. Stuff like:

  1. Your timeline. Do you need to sign an offer in the next week? Or are you trying to start your new job in three months?
  2. What’s most important to you in your next job. Great team? Flexible hours? Interesting technical challenges? Room to grow into a more senior role?
  3. What stuff you’re most interested in working on. Front end? Back end? Machine learning?
Be honest about all this stuff—that’ll make it easier for the recruiter to get you what you want.
One exception to that rule: If the recruiter asks you about your salary expectations on this call, best not to answer. Just say you’d rather talk about compensation after figuring out if you and the company are a good fit. This’ll put you in a better negotiating position later on.

The technical phone interview(s)

The next step is usually one or more hour-long technical phone interviews.
Your interviewer will call you on the phone or tell you to join them on Skype or Google Hangouts. Make sure you can take the interview in a quiet place with a great internet connection. Consider grabbing a set of headphones with a good microphone or a bluetooth earpiece. Always test your hardware beforehand!
The interviewer will want to watch you code in real time. Usually that means using a web-based code editor like Coderpad or collabedit. Run some practice problems in these tools ahead of time, to get used to them. Some companies will just ask you to share your screen through Google Hangouts or Skype.
Turn off notifications on your computer before you get started—especially if you’re sharing your screen!
Technical phone interviews usually have three parts:

  1. Beginning chitchat (5–10 minutes)
  2. Technical challenges (30–50 minutes)
  3. Your turn to ask questions (5–10 minutes)
The beginning chitchat is half just to help you relax, and half actually part of the interview. The interviewer might ask some open-ended questions like:

  1. Tell me about yourself.
  2. Tell me about something you’ve built that you’re particularly proud of.
  3. I see this project listed on your resume—tell me more about that.
You should be able to talk at length about the major projects listed on your resume. What went well? What didn’t? How would you do things differently now?
Then come the technical challenges—the real meat of the interview. You’ll spend most of the interview on this. You might get one long question, or several shorter ones.
What kind of questions can you expect? It depends.
Startups tend to ask questions aimed towards building or debugging code. (“Write a function that takes two rectangles and figures out if they overlap.”). They’ll care more about progress than perfection.
Larger companies will want to test your general know-how of data structures and algorithms (“Write a function that checks if a binary tree is ‘balanced’ in O(n) time.”). They’ll care more about how you solve and optimize a problem.
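To give a feel for that example question, here's one common O(n) approach in Python (a sketch only, assuming "balanced" means the heights of each node's subtrees differ by at most one):

    class Node:
        def __init__(self, left=None, right=None):
            self.left, self.right = left, right

    def is_balanced(root):
        # Compute each subtree's height, using -1 as a sentinel for "unbalanced".
        def height(node):
            if node is None:
                return 0
            left, right = height(node.left), height(node.right)
            if left == -1 or right == -1 or abs(left - right) > 1:
                return -1
            return 1 + max(left, right)
        # Every node is visited exactly once, so this runs in O(n) time.
        return height(root) != -1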
With these types of questions, the most important thing is to be communicating with your interviewer throughout. You'll want to "think out loud" as you work through the problem. For more info, check out our more detailed step-by-step tips for coding interviews.
If the role requires specific languages or frameworks, some companies will ask trivia-like questions (“In Python, what’s the ‘global interpreter lock’?”).
After the technical questions, your interviewer will open the floor for you to ask them questions. Take some time before the interview to comb through the company’s website. Think of a few specific questions about the company or the role. This can really make you stand out.
When you’re done, they should give you a timeframe on when you’ll hear about next steps. If all went well, you’ll either get asked to do another phone interview, or you’ll be invited to their offices for an onsite.

The onsite interview

An onsite interview happens in person, at the company’s office. If you’re not local, it’s common for companies to pay for a flight and hotel room for you.
The onsite usually consists of 2–6 individual, one-on-one technical interviews (usually in a small conference room). Each interview will be about an hour and have the same basic form as a phone screen—technical questions, bookended by some chitchat at the beginning and a chance for you to ask questions at the end.
The major difference between onsite technical interviews and phone interviews though: you’ll be coding on a whiteboard.
This is awkward at first. No autocomplete, no debugging tools, no delete button…ugh. The good news is, after some practice you get used to it. Before your onsite, practice writing code on a whiteboard (in a pinch, a pencil and paper are fine). Some tips:

  1. Start in the top-most left corner of the whiteboard. This gives you the most room. You’ll need more space than you think.
  2. Leave a blank line between each line as you write your code. Makes it much easier to add things in later.
  3. Take an extra second to decide on your variable names. Don’t rush this part. It might seem like a waste of time, but using more descriptive variable names ultimately saves you time because it makes you less likely to get confused as you write the rest of your code.
If a technical phone interview is a sprint, an onsite is a marathon. The day can get really long. Best to keep it open—don’t make other plans for the afternoon or evening.
When things go well, you’ wrap-up by chatting with the CEO or some other director. This is half an interview, half the company trying to impress you. They may invite you to get drinks with the team after hours.
All told, a long day of onsite interviews could look something like this:

If they let you go after just a couple interviews, it’s usually a sign that they’re going to pass on you. That’s okay—it happens!
There are a lot of easy things you can do the day before and morning of your interview to put yourself in the best possible mindset. Check out our piece on what to do in the 24 hours before your onsite coding interview.

The take-home code test

Code tests aren’t ubiquitous, but they seem to be gaining in popularity. They’re far more common at startups, or places where your ability to deliver right away is more important than your ability to grow.
You’ll receive a description of an app or service, a rough time constraint for writing your code, and a deadline for when to turn it in. The deadline is usually negotiable.
Here's an example problem:
Write a basic “To-Do” app. Unit test the core functionality. As a bonus, add a “reminders” feature. Try to spend no more than 8 hours on it, and send in what you have by Friday with a small write-up.
Take a crack at the “bonus” features if they include any. At the very least, write up how you would implement it.
If they’re hiring for people with knowledge of a particular framework, they might tell you what tech to use. Otherwise, it’ll be up to you. Use what you’re most comfortable with. You want this code to show you at your best.
Some places will offer to pay you for your time. It's rare, but some places will even invite you to work with them in their office for a few days, as a "trial."
Do I need to know this "big O" stuff?
Big O notation is the language we use for talking about the efficiency of data structures and algorithms.
Will it come up in your interviews? Well, it depends. There are different types of interviews.
There’s the classic algorithmic coding interview, sometimes called the “Google-style whiteboard interview.” It’s focused on data structures and algorithms (queues and stacks, binary search, etc).
That’s what our full course prepares you for. It's how the big players interview. Google, Facebook, Amazon, Microsoft, Oracle, LinkedIn, etc.
For startups and smaller shops, it’s a mixed bag. Most will ask at least a few algorithmic questions. But they might also include some role-specific stuff, like Java questions or SQL questions for a backend web engineer. They’ll be especially interested in your ability to ship code without much direction. You might end up doing a code test or pair-programming exercise instead of a whiteboarding session.
To make sure you study for the right stuff, you should ask your recruiter what to expect. Send an email with a question like, “Is this interview going to cover data structures and algorithms? Or will it be more focused around coding in X language?” They’ll be happy to tell you.
If you've never learned about data structures and algorithms, or you're feeling a little rusty, check out our Intuitive Guide to Data Structures and Algorithms.
Which programming language should I use?
Companies usually let you choose, in which case you should use your most comfortable language. If you know a bunch of languages, prefer one that lets you express more with fewer characters and fewer lines of code, like Python or Ruby. It keeps your whiteboard cleaner.
Try to stick with the same language for the whole interview, but sometimes you might want to switch languages for a question. E.g., processing a file line by line will be far easier in Python than in C++.
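For example, line-by-line file processing in Python is just a loop over the file handle (the filename and handle_line are hypothetical here):

    with open("input.txt") as f:   # hypothetical filename
        for line in f:             # file objects iterate line by line
            handle_line(line)      # hypothetical per-line processing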
Sometimes, though, your interviewer will do this thing where they have a pet question that’s, for example, C-specific. If you list C on your resume, they’ll ask it.
So keep that in mind! If you’re not confident with a language, make that clear on your resume. Put your less-strong languages under a header like ‘Working Knowledge.’
What should I wear?
A good rule of thumb is to dress a tiny step above what people normally wear to the office. For most west coast tech companies, the standard digs are just jeans and a t-shirt. Ask your recruiter what the office is like if you’re worried about being too casual.
Should I send a thank-you note?
Thank-you notes are nice, but they aren’t really expected. Be casual if you send one. No need for a hand-calligraphed note on fancy stationery. Opt for a short email to your recruiter or the hiring manager. Thank them for helping you through the process, and ask them to relay your thanks to your interviewers.
1) Coding Interview Tips
How to get better at technical interviews without practicing
Chitchat like a pro.
Before diving into code, most interviewers like to chitchat about your background. They're looking for:

You should have at least one:

Nerd out about stuff. Show you're proud of what you've done, you're amped about what they're doing, and you have opinions about languages and workflows.
Once you get into the coding questions, communication is key. A candidate who needed some help along the way but communicated clearly can be even better than a candidate who breezed through the question.
Understand what kind of problem it is. There are two types of problems:

  1. Coding. The interviewer wants to see you write clean, efficient code for a problem.
  2. Chitchat. The interviewer just wants you to talk about something. These questions are often either (1) high-level system design ("How would you build a Twitter clone?") or (2) trivia ("What is hoisting in Javascript?"). Sometimes the trivia is a lead-in for a "real" question e.g., "How quickly can we sort a list of integers? Good, now suppose instead of integers we had . . ."
If you start writing code and the interviewer just wanted a quick chitchat answer before moving on to the "real" question, they'll get frustrated. Just ask, "Should we write code for this?"
Make it feel like you're on a team. The interviewer wants to know what it feels like to work through a problem with you, so make the interview feel collaborative. Use "we" instead of "I," as in, "If we did a breadth-first search we'd get an answer in O(n) time." If you get to choose between coding on paper and coding on a whiteboard, always choose the whiteboard. That way you'll be situated next to the interviewer, facing the problem (rather than across from her at a table).
Think out loud. Seriously. Say, "Let's try doing it this way—not sure yet if it'll work." If you're stuck, just say what you're thinking. Say what might work. Say what you thought could work and why it doesn't work. This also goes for trivial chitchat questions. When asked to explain Javascript closures, "It's something to do with scope and putting stuff in a function" will probably get you 90% credit.
Say you don't know. If you're touching on a fact (e.g., language-specific trivia, a hairy bit of runtime analysis), don't try to appear to know something you don't. Instead, say "I'm not sure, but I'd guess $thing, because...". The because can involve ruling out other options by showing they have nonsensical implications, or pulling examples from other languages or other problems.
Slow the eff down. Don't confidently blurt out an answer right away. If it's right you'll still have to explain it, and if it's wrong you'll seem reckless. You don't win anything for speed and you're more likely to annoy your interviewer by cutting her off or appearing to jump to conclusions.
Get unstuck.
Sometimes you'll get stuck. Relax. It doesn't mean you've failed. Keep in mind that the interviewer usually cares more about your ability to cleverly poke the problem from a few different angles than your ability to stumble into the correct answer. When hope seems lost, keep poking.
Draw pictures. Don't waste time trying to think in your head—think on the board. Draw a couple different test inputs. Draw how you would get the desired output by hand. Then think about translating your approach into code.
Solve a simpler version of the problem. Not sure how to find the 4th largest item in the set? Think about how to find the 1st largest item and see if you can adapt that approach.
Write a naive, inefficient solution and optimize it later. Use brute force. Do whatever it takes to get some kind of answer.
Think out loud more. Say what you know. Say what you thought might work and why it won't work. You might realize it actually does work, or a modified version does. Or you might get a hint.
Wait for a hint. Don't stare at your interviewer expectantly, but do take a brief second to "think"—your interviewer might have already decided to give you a hint and is just waiting to avoid interrupting.
Think about the bounds on space and runtime. If you're not sure if you can optimize your solution, think about it out loud. For example:

Get your thoughts down.
It's easy to trip over yourself. Focus on getting your thoughts down first and worry about the details at the end.
Call a helper function and keep moving. If you can't immediately think of how to implement some part of your algorithm, big or small, just skip over it. Write a call to a reasonably-named helper function, say "this will do X" and keep going. If the helper function is trivial, you might even get away with never implementing it.
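For instance, if your algorithm needs words split out of a message, just call a plausible-sounding helper and keep going (split_into_words is the hypothetical helper here):

    def reverse_words(message):
        words = split_into_words(message)  # "this will do X" -- implement later if asked
        return " ".join(reversed(words))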
Don't worry about syntax. Just breeze through it. Revert to English if you have to. Just say you'll get back to it.
Leave yourself plenty of room. You may need to add code or notes in between lines later. Start at the top of the board and leave a blank line between each line.
Save off-by-one checking for the end. Don't worry about whether your for loop should have "<" or "<=". Write a checkmark to remind yourself to check it at the end. Just get the general algorithm down.
Use descriptive variable names. This will take time, but it will prevent you from losing track of what your code is doing. Use names_to_phone_numbers instead of nums. Imply the type in the name. Functions returning booleans should start with "is_*". Vars that hold a list should end with "s." Choose standards that make sense to you and stick with them.
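A quick sketch of the difference (all names here are just illustrations):

    # Vague: what does this hold?
    nums = {}

    # Descriptive: clearly a dict mapping names to phone numbers.
    names_to_phone_numbers = {}

    # Booleans read naturally with an "is_" prefix; a plural implies a list.
    is_valid = True
    phone_numbers = []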
Clean up when you're done.
Walk through your solution by hand, out loud, with an example input. Actually write down what values the variables hold as the program is running—you don't win any brownie points for doing it in your head. This'll help you find bugs and clear up confusion your interviewer might have about what you're doing.
Look for off-by-one errors. Should your for loop use a "<=" instead of a "<"?
Test edge cases. These might include empty sets, single-item sets, or negative numbers. Bonus: mention unit tests!
Don't be boring. Some interviewers won't care about these cleanup steps. If you're unsure, say something like, "Then I'd usually check the code against some edge cases—should we do that next?"
In the end, there's no substitute for running practice questions.
Actually write code with pen and paper. Be honest with yourself. It'll probably feel awkward at first. Good. You want to get over that awkwardness now so you're not fumbling when it's time for the real interview.

2) Tricks For Getting Unstuck During a Coding Interview
Getting stuck during a coding interview is rough.
If you weren’t in an interview, you might take a break or ask Google for help. But the clock is ticking, and you don’t have Google.
You just have an empty whiteboard, a smelly marker, and an interviewer who’s looking at you expectantly. And all you can think about is how stuck you are.
You need a lifeline for these moments—like a little box that says “In Case of Emergency, Break Glass.”
Inside that glass box? A list of tricks for getting unstuck. Here’s that list of tricks.
When you’re stuck on getting started
1) Write a sample input on the whiteboard and turn it into the correct output "by hand." Notice the process you use. Look for patterns, and think about how to implement your process in code.
Trying to reverse a string? Write “hello” on the board. Reverse it “by hand”—draw arrows from each character’s current position to its desired position.
Notice the pattern: it looks like we’re swapping pairs of characters, starting from the outside and moving in. Now we’re halfway to an algorithm.
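In Python, that pattern translates almost directly into code (a sketch that reverses a list of characters in place):

    def reverse(chars):
        left, right = 0, len(chars) - 1
        while left < right:
            # Swap the outermost pair, then move both pointers inward.
            chars[left], chars[right] = chars[right], chars[left]
            left += 1
            right -= 1

    word = list("hello")
    reverse(word)  # word is now ['o', 'l', 'l', 'e', 'h']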
2) Solve a simpler version of the problem. Remove or simplify one of the requirements of the problem. Once you have a solution, see if you can adapt that approach for the original question.
Trying to find the k-largest element in a set? Walk through finding the largest element, then the second largest, then the third largest. Generalizing from there to find the k-largest isn’t so bad.
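A first pass at that generalization might look like this (a deliberately naive O(n * k) sketch; a heap would do better):

    def kth_largest(items, k):
        remaining = list(items)
        # Peel off the current maximum k - 1 times; the max of what's
        # left is then the k-th largest.
        for _ in range(k - 1):
            remaining.remove(max(remaining))
        return max(remaining)

    kth_largest([3, 1, 4, 1, 5, 9, 2, 6], 3)  # 5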
3) Start with an inefficient solution. Even if it feels stupidly inefficient, it’s often helpful to start with something that’ll return the right answer. From there, you just have to optimize your solution. Explain to your interviewer that this is only your first idea, and that you suspect there are faster solutions.
Suppose you were given two lists of sorted numbers and asked to find the median of both lists combined. It’s messy, but you could simply:

  1. Concatenate the arrays together into a new array.
  2. Sort the new array.
  3. Return the value at the middle index.
Notice that you could’ve also arrived at this algorithm by using trick (2): Solve a simpler version of the problem. “How would I find the median of one sorted list of numbers? Just grab the item at the middle index. Now, can I adapt that approach for getting the median of two sorted lists?”
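Here's that messy-but-correct first solution in Python (a starting point to optimize later):

    def median_of_two_sorted(a, b):
        merged = sorted(a + b)  # concatenate, then sort: O((m+n) log(m+n))
        mid = len(merged) // 2
        if len(merged) % 2 == 1:
            return merged[mid]  # odd length: the single middle value
        return (merged[mid - 1] + merged[mid]) / 2  # even: average the two middles

    median_of_two_sorted([1, 3, 5], [2, 4])  # 3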
When you’re stuck on finding optimizations
1) Look for repeat work. If your current solution goes through the same data multiple times, you’re doing unnecessary repeat work. See if you can save time by looking through the data just once.
Say that inside one of your loops, there’s a brute-force operation to find an element in an array. You’re repeatedly looking through items that you don’t have to. Instead, you could convert the array to a lookup table to dramatically improve your runtime.
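A sketch of that fix (requests, allowed_ids_list, and handle are hypothetical names):

    # Before: a linear scan inside the loop -- O(n * m) overall.
    for request in requests:
        if request.user_id in allowed_ids_list:
            handle(request)

    # After: build a set once for O(1) average lookups -- O(n + m) overall.
    allowed_ids = set(allowed_ids_list)
    for request in requests:
        if request.user_id in allowed_ids:
            handle(request)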
2) Look for hints in the specifics of the problem. Is the input array sorted? Is the binary tree balanced? Details like this can carry huge hints about the solution. If it didn’t matter, your interviewer wouldn’t have brought it up. It’s a strong sign that the best solution to the problem exploits it.
Suppose you’re asked to find the first occurrence of a number in a sorted array. The fact that the array is sorted is a strong hint—take advantage of that fact by using a binary search.
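A sketch of that left-biased binary search in Python:

    def first_occurrence(sorted_nums, target):
        lo, hi = 0, len(sorted_nums) - 1
        first = -1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_nums[mid] == target:
                first = mid   # record the match, but keep searching left
                hi = mid - 1
            elif sorted_nums[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return first          # -1 if the target isn't present

    first_occurrence([1, 2, 2, 2, 3], 2)  # 1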

Sometimes interviewers leave the question deliberately vague because they want you to ask questions to unearth these important tidbits of context. So ask some questions at the beginning of the problem.
3) Throw some data structures at the problem. Can you save time by using the fast lookups of a hash table? Can you express the relationships between data points as a graph? Look at the requirements of the problem and ask yourself if there’s a data structure that has those properties.
4) Establish bounds on space and runtime. Think out loud about the parameters of the problem. Try to get a sense for how fast your algorithm could possibly be:

When All Else Fails
1) Make it clear where you are. State what you know, what you’re trying to do, and highlight the gap between the two. The clearer you are in expressing exactly where you’re stuck, the easier it is for your interviewer to help you.
2) Pay attention to your interviewer. If she asks a question about something you just said, there’s probably a hint buried in there. Don’t worry about losing your train of thought—drop what you’re doing and dig into her question.
Relax. You’re supposed to get stuck.
Interviewers choose hard problems on purpose. They want to see how you poke at a problem you don’t immediately know how to solve.
Seriously. If you don’t get stuck and just breeze through the problem, your interviewer’s evaluation might just say “Didn’t get a good read on candidate’s problem-solving process—maybe she’d already seen this interview question before?”
On the other hand, if you do get stuck, use one of these tricks to get unstuck, and communicate clearly with your interviewer throughout...that’s how you get an evaluation like, “Great problem-solving skills. Hire.”

3) Fixing Impostor Syndrome in Coding Interviews
“It's a fluke that I got this job interview...”
“I studied for weeks, but I’m still not prepared...”
“I’m not actually good at this. They’re going to see right through me...”
If any of these thoughts resonate with you, you're not alone. They are so common they have a name: impostor syndrome.
It’s that feeling like you’re on the verge of being exposed for what you really are—an impostor. A fraud.
Impostor syndrome is like kryptonite to coding interviews. It makes you give up and go silent.
You might stop asking clarifying questions because you’re afraid they’ll sound too basic. Or you might neglect to think out loud at the whiteboard, fearing you’ll say something wrong and sound incompetent.
You know you should speak up, but the fear of looking like an impostor makes that really, really hard.
Here’s the good news: you’re not an impostor. You just feel like an impostor because of some common cognitive biases about learning and knowledge.
Once you understand these cognitive biases—where they come from and how they work—you can slowly fix them. You can quiet your worries about being an impostor and keep those negative thoughts from affecting your interviews.

Everything you could know

Here’s how impostor syndrome works.
Software engineering is a massive field. There’s a huge universe of things you could know. Huge.
In comparison to the vast world of things you could know, the stuff you actually know is just a tiny sliver:
That’s the first problem. It feels like you don’t really know that much, because you only know a tiny sliver of all the stuff there is to know.

The expanding universe

It gets worse: counterintuitively, as you learn more, your sliver of knowledge feels like it's shrinking.
That's because you brush up against more and more things you don’t know yet. Whole disciplines like machine learning, theory of computation, and embedded systems. Things you can't just pick up in an afternoon. Heavy bodies of knowledge that take months to understand.
So the universe of things you could know seems to keep expanding faster and faster—much faster than your tiny sliver of knowledge is growing. It feels like you'll never be able to keep up.

What everyone else knows

Here's another common cognitive bias: we assume that because something is easy for us, it must be easy for everyone else. So when we look at our own skills, we assume they're not unique. But when we look at other people's skills, we notice the skills they have that we don't have.
The result? We think everyone’s knowledge is a superset of our own:
This makes us feel like everyone else is ahead of us. Like we're always a step behind.
But the truth is more like this:
There's a whole area of stuff you know that neither Aysha nor Bruno knows. An area you're probably blind to, because you're so focused on the stuff you don't know.

We’ve all had flashes of realizing this. For me, it was seeing the back end code wizard on my team—the one that always made me feel like an impostor—spend an hour trying to center an image on a webpage.

It's a problem of focus

Focusing on what you don't know causes you to underestimate what you do know. And that's what causes impostor syndrome.
By looking at the vast (and expanding) universe of things you could know, you feel like you hardly know anything.
And by looking at what Aysha and Bruno know that you don't know, you feel like you're a step behind.
And interviews make you really focus on what you don't know. You focus on what could go wrong. The knowledge gaps your interviewers might find. The questions you might not know how to answer.
But remember:
Just because Aysha and Bruno know some things you don't know, doesn't mean you don't also know things Aysha and Bruno don't know.
And more importantly, everyone's body of knowledge is just a teeny-tiny sliver of everything they could learn. We all have gaps in our knowledge. We all have interview questions we won't be able to answer.
You're not a step behind. You just have a lot of stuff you don't know yet. Just like everyone else.

4) The 24 Hours Before Your Interview

Feeling anxious? That’s normal. Your body is telling you you’re about to do something that matters.

The twenty-four hours before your onsite are about finding ways to maximize your performance. Ideally, you wanna be having one of those days, where elegant code flows effortlessly from your fingertips, and bugs dare not speak your name for fear you'll squash them.
You need to get your mind and body in The Zone™ before you interview, and we've got some simple suggestions to help.
5) Why You're Hitting Dead Ends In Whiteboard Interviews

The coding interview is like a maze

Listening vs. holding your train of thought

Finally! After a while of shooting in the dark and frantically fiddling with sample inputs on the whiteboard, you've come up with an algorithm for solving the coding question your interviewer gave you.
Whew. Such a relief to have a clear path forward. To not be flailing anymore.
Now you're cruising, getting ready to code up your solution.
When suddenly, your interviewer throws you a curve ball.
"What if we thought of the problem this way?"
You feel a tension we've all felt during the coding interview:
"Try to listen to what they're saying...but don't lose your train of thought...ugh, I can't do both!"
This is a make-or-break moment in the coding interview. And so many people get it wrong.
Most candidates end up only half understanding what their interviewer is saying. Because they're only half listening. Because they're desperately clinging to their train of thought.
And it's easy to see why. For many of us, completely losing track of what we're doing is one of our biggest coding interview fears. So we devote half of our mental energy to clinging to our train of thought.
To understand why that's so wrong, we need to understand the difference between what we see during the coding interview and what our interviewer sees.

The programming interview maze

Working on a coding interview question is like walking through a giant maze.
You don't know anything about the shape of the maze until you start wandering around it. You might know vaguely where the solution is, but you don't know how to get there.
As you wander through the maze, you might find a promising path (an approach, a way to break down the problem). You might follow that path for a bit.
Suddenly, your interviewer suggests a different path:
But from what you can see so far of the maze, your approach has already gotten you halfway there! Losing your place on your current path would mean a huge step backwards. Or so it seems.
That's why people hold onto their train of thought instead of listening to their interviewer. Because from what they can see, it looks like they're getting somewhere!
But here's the thing: your interviewer knows the whole maze. They've asked this question 100 times.

I'm not exaggerating: if you interview candidates for a year, you can easily end up asking the same question over 100 times.
So if your interviewer is suggesting a certain path, you can bet it leads to an answer.
And your seemingly great path? There's probably a dead end just ahead that you haven't seen yet:
Or it could just be a much longer route to a solution than you think it is. That actually happens pretty often—there's an answer there, but it's more complicated than you think.

Hitting a dead end is okay. Failing to listen is not.

Your interviewer probably won't fault you for going down the wrong path at first. They've seen really smart engineers do the same thing. They understand it's because you only have a partial view of the maze.
They might have let you go down the wrong path for a bit to see if you could keep your thinking organized without help. But now they want to rush you through the part where you discover the dead end and double back. Not because they don't believe you can manage it yourself. But because they want to make sure you have enough time to finish the question.
But here's something they will fault you for: failing to listen to them. Nobody wants to work with an engineer who doesn't listen.
So when you find yourself in that crucial coding interview moment, when you're torn between holding your train of thought and considering the idea your interviewer is suggesting...remember this:
Listening to your interviewer is the most important thing.
Take what they're saying and run with it. Think of the next steps that follow from what they're saying.
Even if it means completely leaving behind the path you were on. Trust the route your interviewer is pointing you down.
Because they can see the whole maze.
6) How To Get The Most Out Of Your Coding Interview Practice Sessions
When you start practicing for coding interviews, there’s a lot to cover. You’ll naturally wanna brush up on technical questions. But how you practice those questions will make a big difference in how well you’re prepared.
Here’re a few tips to make sure you get the most out of your practice sessions.
Track your weak spots
One of the hardest parts of practicing is knowing what to practice. Tracking what you struggle with helps answer that question.
So grab a fresh notebook. After each question, look back and ask yourself, “What did I get wrong about this problem at first?” Take the time to write down one or two things you got stuck on, and what helped you figure them out. Compare these notes to our tips for getting unstuck.
After each full practice session, read through your entire running list. Read it at the beginning of each practice session too. This’ll add a nice layer of rigor to your practice, so you’re really internalizing the lessons you’re learning.
Use an actual whiteboard
Coding on a whiteboard is awkward at first. You have to write out every single character, and you can’t easily insert or delete blocks of code.
Use your practice sessions to iron out that awkwardness. Run a few problems on a piece of paper or, if you can, a real whiteboard. A few helpful tips for handwriting code:

Set a timer
Get a feel for the time pressure of an actual interview. You should be able to finish a problem in 30–45 minutes, including debugging your code at the end.
If you’re just starting out and the timer adds too much stress, put this technique on the shelf. Add it in later as you start to get more comfortable with solving problems.
Think out loud
Like writing code on a whiteboard, this is an acquired skill. It feels awkward at first. But your interviewer will expect you to think out loud during the interview, so you gotta power through that awkwardness.
A good trick to get used to talking out loud: Grab a buddy. Another engineer would be great, but you can also do this with a non-technical friend.
Have your buddy sit in while you talk through a problem. Better yet—try loading up one of our questions on an iPad and giving that to your buddy to use as a script!
Set aside a specific time of day to practice.
Give yourself an hour each day to practice. Commit to practicing around the same time, like after you eat dinner. This helps you form a stickier habit of practicing.
Prefer small, daily doses of practice to doing big cram sessions every once in a while. Distributing your practice sessions helps you learn more with less time and effort in the long run.
Part 2 will be upcoming in another post!
submitted by Cyberrockz to u/Cyberrockz

How to prevent customer cancellations

Customer retention is a goal every business owner should be obsessed with. At the end of the day it's cheaper to retain an existing customer than it is to acquire a new one.
But how do you ensure that your customers keep using your service?
Are there any simple, yet effective ways to reduce or even prevent churn?
As it turns out there's one simple strategy you can use to keep your customers around even if they're about to leave your platform. Let's explore what it is and why it works.

Why you should obsess over customer retention

As already stated in the introduction, it's important to focus on customer retention when building a sustainable business.
Acquiring customers can be an expensive endeavour. If you're not (yet) in a position where your product grows through Word-of-Mouth you're likely spending a good portion of your revenue on paid ads and marketing to drive traffic to your service. Only a few of your thousands of visitors will eventually try your product and convert to become a paying customer.
Optimizing this marketing and sales funnel is a tricky and costly activity. Think about it for a minute. Who finances the learning and tweaking of such a funnel? Correct: your existing customers.
That's why keeping your users happy and around is one of the most important business objectives.

Why customers are churning

If you think about it, there's really only one reason why your customers are leaving your platform:
Your product isn't a crucial part of their life anymore
While this sounds harsh I'd like you to think about all the services you're currently subscribing to. Now imagine that you can only keep one. What would you cancel? Probably everything except the one you can't live without.
Of course, the preferences are different from person to person and they change over time. And that's the exact reason why people cancel their subscription with your service: Their preferences have changed and they might want to take a pause from your service or need something else entirely.

"Churn Baby Churn"

Now that we know why your customers churn, it's time to get into their shoes and think about ways to keep them around.
One of the "industry" standards is to send out a survey once they're about to leave to gather feedback and convince them to stay. Some services offer coupon codes if for example the user has clicked on the "it's too expensive" option in the survey.
Other tactics are more on the "dark patterns" side of things. Hiding buttons, asking double negative questions or using other techniques to make it nearly impossible to leave. Needless to say that customers of businesses practicing such tactics aren't the ones who spread the word on how awesome the product is. Quite the opposite.
But let's take a step back for a minute and ask ourselves why this "should I stay or should I go" question has to be binary in the first place. Isn't there something "right in the middle"? Something where a user can stay but somehow go at the same time?

"Wait a minute... or a month..."

The solution to this dilemma is dead simple and obvious, yet rarely used: Make it possible to pause the subscription.
Yes, it's that simple. Just offer a way to pause a subscription and get back to it once your user's current circumstances have changed.
Now you might think that it's a really bad idea to let users pause their subscription. They'll pause and never come back. So essentially it's "passive churn": they haven't left the platform yet, but they might never use it again. The stale user data sits in the database and your dashboards still show hockey-stick growth. Furthermore, it's a huge implementation effort, since pausing and resuming subscriptions usually isn't considered business-critical and hence hasn't been implemented yet.
Those are all valid concerns, and some of them might turn out to be true even if you have a "pause and resume your subscription" system in place. But let's take a second to look at the other side of the equation.

Why pausing is a good idea

The very first thing that comes to mind is the COVID-19 pandemic we're currently in. A lot of businesses scaled back and hence had to cancel subscriptions to their favorite SaaS tools to cut costs. A common "save the customer" tactic used here was to get in touch with the business owner and offer heavily discounted, year-long subscription plans. That way businesses could reassess whether they should really quit and leave the huge discount on the table, or just go with it and double down to benefit from the sweet, discounted multi-year subscription deal.
Letting businesses put their subscriptions on hold would be another strategy to help retain and eventually reactivate your users during this pandemic. Put yourself in your customers' shoes again for a minute. Wouldn't you want to pay it back in the future if your supplier lent you a helping hand and wasn't "forcing" you out the door?
Even if your customers pause their account, you still have their e-mail address to reach out to them and keep them informed about your product. In fact, you should use this opportunity to stay in touch, ask them how they're doing, and provide something of value along the way. That way you keep the communication "warm" and your business stays on their radar. There's a higher likelihood that they'll think about your service when times have changed and they're about to scale things up again.
Pausing a subscription is an action that's usually taken with some level of consideration. If your customer wants to quit, (s)he'll just cancel the subscription anyway. Offering a way to pause for the time being is an option your users might just not have right now, so they're forced to make a very binary decision and therefore just quit.
What you should also keep in mind is that pausing a subscription doesn't necessarily mean that you'll lose revenue. There are different and very creative ways in which you can implement the pause. My gym, for example, simply extends my membership by the number of months I put it on hold. In the summer I make use of this feature since I do my workouts outside anyway. However, those 3-4 months I "save" are simply added to the end of my contract. I just have a little bit more control over how and where I spend my time with sports. You can get really creative here and invent other ways for this mechanism to work if you really want to ensure that you don't lose revenue.
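In code, that gym-style variant boils down to simple date arithmetic (a minimal Python sketch with made-up names):

    from datetime import date, timedelta

    def end_date_after_pause(current_end: date, paused_days: int) -> date:
        # The paused time is appended to the contract, so no revenue is
        # lost -- it's just shifted into the future.
        return current_end + timedelta(days=paused_days)

    end_date_after_pause(date(2021, 1, 31), 90)  # roughly a 3-month extension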
A last, important point is that you can use this functionality as a competitive advantage and "marketing material". Be sure to add the fact that people can pause their subscription to your list of product benefits. Add it to the copy right next to your "Subscribe Now" button. Addressing objections and concerns right before the call-to-action is about to happen will drastically increase your conversion rates.

Things to keep in mind when going down that path

Now you might be excited and eager to implement this strategy in the near future but before you do so I'd like to call out a couple of things you should keep in mind when implementing it.
First of all: Keep it simple. There's no need to jump right into code and implement this functionality end-to-end. Do it manually in the beginning. Update the database records and the subscription plans for people who want to pause their subscription by hand. Maybe you find out that very few people want to make use of this feature. What you definitely want to put in place is your new copywriting. As discussed above you should ensure that your marketing website is updated and reflects the recent change you just introduced.
Next up, you want to have an automated follow-up e-mail sequence / drip campaign set up for pausing customers. Keep in touch. Ask about problems they had with your software and help them succeed in whatever they're up to right now. You might want to jump on a quick call to gather some feedback as to why they paused and understand what needs to be in place for them to come back. If you do this, please ensure that you're genuinely interested in the communication. There's nothing worse for a user than composing a reply and shooting the e-mail into the marketing void.
A very important, yet often overlooked step is to have a tool in place which deals with "passive churn". Such a system ensures that the credit cards on file are up to date and chargeable. There could be an overlap between your users pausing their subscription and their credit cards expiring. You don't want to make them look bad because of that. You could even think about a "concierge service" which onboards them in person once they come back. Combine this with a quick update on all the new features / updates they missed and are not yet familiar with.
Lastly, you absolutely don't want to make it hard for your users to pause their subscription. As mentioned above, avoid dark patterns at all costs. And more importantly: don't penalize them for pausing. Messages such as "We'll retain your data for the next 60 days" are inappropriate in this day and age of "Big Data" and access to petabytes of storage for a nickel and dime.

Your challenge

I'd like to challenge you to think about adding the possibility to pause a subscription. Is it suitable for your business? Would it help you retain and reactivate more customers (especially in the current situation we're in)?
If you're about to add it, keep in mind that it doesn't have to be complicated. Start with a simple E-Mail form your users can fill out to let you know for how long they want to pause. Just make sure that you follow the best practices outlined above and that you advertise that it's now possible for your customers to pause their subscriptions.


Customer retention is one of the most important metrics every business owner should focus on. It's the existing customers who finance the Customer Acquisition Costs that are necessary to bring new users into the door.
It's almost always cheaper to keep your existing customers happy than to lose them and acquire brand new ones.
Unfortunately a lot of SaaS services only offer a very binary option for their subscription plans. As a user you're either in or you're out. You stay or you leave. But what if a user wants to take a pause for a few months because of current changes in life circumstances?
Offering a way to pause a subscription is a simple, yet effective way to retain and eventually reactivate your existing customers. Remember that a pause is temporary. If you follow up with them on a continuous basis and help them succeed, they'll eventually come back. Maybe even as a raving, more loyal fan of your brand.
I hope that you enjoyed this article and I'd love to invite you to subscribe to my Newsletter if you're interested in more, action-oriented posts like this.
Do you have any questions, feedback or comments? Feel free to reach out via E-Mail or connect with me on Twitter.
This post was originally published on
submitted by pmuens to indiebiz

Ethereum on ARM. New Eth2.0 Raspberry pi 4 image for automatically joining Prylabs Onyx Eth2.0 testnet. Step-by-step guide for installing and activating a validator.

TL;DR: Flash your Raspberry Pi 4, plug in an ethernet cable, connect the SSD disk and power up the device to join the Eth2.0 Onyx testnet.
The image takes care of all the necessary steps to join the Eth2.0 Onyx testnet [1], from setting up the environment and formatting the SSD disk to installing and running the Ethereum Eth1.0 and Eth2.0 clients as well as starting the blockchains synchronization (for both Geth Eth1.0 Goerli [2] and Prysm [3] Eth2.0 Beacon Chain).
You will only need to create a validator account, send the deposit of 32 Goerli ETH to the Onyx contract and start the validator systemd service.


You will need an SSD to run the Ethereum clients (without an SSD drive there’s absolutely no chance of syncing the Ethereum blockchain). There are 2 options:
In both cases, avoid low-quality SSD disks, as the disk is a key component of your node and can drastically affect performance (and sync times). Keep in mind that you need to plug the disk into a USB 3.0 port (the blue ones).
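Once everything is plugged in, you can double-check that the disk actually negotiated a USB 3.0 link. A quick sketch (usbutils ships with Ubuntu):
# Print the USB device tree with negotiated speeds.
# The SSD bridge should report 5000M (USB 3.0); 480M means it ended up on a USB 2.0 port.
lsusb -t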
1.- Download the image:
SHA256 13bc7ac4de6e18093b99213511791b2a24b659727b22a8a8d44f583e73a507cc
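Before flashing, it's worth verifying the download against that hash. A sketch, assuming the zip filename matches the image name used in the flash step below:
# Compute the SHA256 of the downloaded archive; it must match the hash published above.
sha256sum ubuntu-20.04-preinstalled-server-arm64+raspi.img.zip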
2.- Flash the image
Insert the microSD in your Desktop / Laptop and download the file:
Note: If you are not comfortable with the command line, or if you are running Windows, you can use Etcher [8]
Open a terminal and check your MicroSD device name running:
sudo fdisk -l 
You should see a device named mmcblk0 or sdd. Unzip and flash the image:
unzip ubuntu-20.04-preinstalled-server-arm64+raspi.img.zip
sudo dd bs=1M if=ubuntu-20.04-preinstalled-server-arm64+raspi.img of=/dev/mmcblk0 conv=fdatasync status=progress
3.- Insert the MicroSD into the Raspberry Pi 4. Connect an Ethernet cable and attach the USB SSD disk (make sure you are using a blue port).
4.- Power on the device
The Ubuntu OS will boot up in less than one minute, but you will need to wait approximately 7 minutes to allow the script to perform the necessary tasks to join the Onyx testnet (the device will reboot again when it's done).
5.- Log in
You can log in through SSH or using the console (if you have a monitor and keyboard attached)
User: ethereum Password: ethereum 
You will be prompted to change the password on first login, so you will need to log in twice.
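From another machine on your network, the SSH login looks like this (replace the IP with the one your router assigned to the Raspberry Pi):
# First login triggers the forced password change and the session closes;
# log in a second time with your new password.
ssh ethereum@replace_with_your_IP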
6.- Forward 30303 and 13000 ports in your router (both UDP and TCP). If you don’t know how to do this, google “port forwarding” followed by your router model.
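Once the clients are running, you can confirm they are actually listening on those ports from the Pi itself. A sketch using ss, which ships with Ubuntu 20.04:
# List listening TCP/UDP sockets and filter for the Eth1.0 (30303) and Eth2.0 (13000) ports.
sudo ss -tulpn | grep -E '30303|13000'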
7.- Getting console output
You can see what’s happening in the background by typing:
sudo tail -f /var/log/syslog 
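syslog mixes the output of every service together. If you only want to follow one client, its systemd journal is cleaner. A sketch (the geth unit name is an assumption here; check the actual names with systemctl list-units --type=service):
# Follow the logs of a single service instead of the whole syslog.
sudo journalctl -fu geth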
8.- Grafana Dashboards
There are 2 Grafana dashboards to monitor the node (see the “Grafana Dashboards” section below).
See [9]

The Onyx Eth2.0 testnet

Onyx is an Eth2.0 testnet created by Prylabs according to the latest official specification for Eth2.0, the v0.12.1 release [10] (which is intended to be the final one).
In order to run an Onyx Eth2.0 node you will need 3 components: an Eth1.0 client (Geth), an Eth2.0 Beacon Chain client (Prysm) and a validator.
The image takes care of the Eth1.0 Geth and Eth2.0 Beacon Chain configurations and syncs. So, once flashed (and after a first reboot), Geth (Eth1.0 client) starts syncing the Goerli testnet and the Beacon Chain (Eth2.0 client) gets activated through the Prysm client, both as systemd services.
When the Goerli testnet sync is completed, the Beacon Chain starts syncing. Both chains are necessary as the validator needs to communicate with them (as explained below).
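You can quickly verify that both clients came up as systemd services. A sketch; the exact unit names on the image are an assumption, hence the loose match:
# List any service units related to the Ethereum clients and their current state.
systemctl list-units --type=service | grep -Ei 'geth|prysm|beacon'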
Activating the validator
Once Goerli and the Beacon Chain are in sync you have just one task left: configuring the validator to enable the staking process.
The image provides the Prysm validator client for running the staking process. With this validator, you will create an account with 2 keys (public and private) and get a HEX string that needs to be sent to the Eth1.0 blockchain as data through a 32 ETH transaction.
The Beacon Chain (which is connected to the Eth1 chain) will detect this deposit (which includes the validator public key) and the Validator will be activated.
So, let’s get started. Geth Goerli testnet and the Beacon Chain are already syncing in the background. Goerli will sync in about 1 hour and the Beacon Chain in about 2 hours (so this will take 3 hours overall).
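To check Goerli's progress you can attach to Geth's console. A sketch, assuming the geth binary is on the PATH and the default Goerli datadir under the ethereum user's home:
# Ask Geth for its sync status: 'false' means fully synced,
# otherwise it prints currentBlock vs highestBlock.
sudo geth attach /home/ethereum/.ethereum/goerli/geth.ipc --exec 'eth.syncing'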
The easiest way to enable a Prysm validator is to use the Prylabs web portal to get Goerli ETH (testnet ETH) and follow their instructions:
Let’s break this down:
Step 1) Get Prysm
Nothing to do here. Prysm is already installed.
Step 2) Get GöETH — Test ETH
We need 32 ETH to stake (it is fake ETH, as this is a testnet). Prylabs created a faucet with a great UI so you can easily get 32.5 Goerli ETH.
You will need a web3 provider to use the faucet. Install the Metamask browser extension (if you don't have it running yet). Create an account and set the network to “Goerli test network” (at the top of the Metamask screen). Now, click once on “Metamask” and then click the “Need GoETH?” button. Confirm the transaction.
Once funded, you will see something like this:
You are 0x0b2eFdbFB8EcaF7F4eCF6853cbd5eaD86510d63C and you have 32.5 GöETH. 
Step 3) Generate a validator public / private key
Go to your Raspberry Pi console and run the following command (make sure you are logged in with your ethereum user):
validator accounts create 
Press return to confirm the default path
Enter a password twice (you will need it later to run the validator, so write it down and keep it safe). Once finished, your account will be created (under the /home/ethereum/.eth2validators directory) containing, among other info, your validator keys. Additionally, you will get the deposit data as follows (this is an example):
========================Deposit Data=======================
0x22895118000000000000000000000000000000000000000000000000000000000000008000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001202f06da05b7e399e151f05d910369779ddd5c4c577ed264fd17040a9931b5adf10000000000000000000000000000000000000000000000000000000000000030affc980d9b2c86d1fb1fa70fd95c56dae34efcaa7bf923e020ac8941519065ff70b6b5ba6644e654ba372473b6b5837100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000002000a494d8e641d82ea723bc2f83b40bfd7f752ff7143cf16e57ad6627e99f0e590000000000000000000000000000000000000000000000000000000000000060b69dd0e51e68ddf8b2f5ecbdb8112b23b46dc8c7c7a68185652884b162b8000464847308b165a33aa102a00199e9c0800f53c768376fd88a3ba5f11e6d2eb3b5f6a455b97b4abe953faa270ca6e187db9739e047bf6fd51e02ab49b4ba17d376
===================================================================
***Enter the above deposit data into step 3 on***
Copy this data (just the hexadecimal part, from 0x to the last number), go back to step 3 of Prylabs website and paste it into the field “Your validator deposit data”.
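Before moving on, consider backing up the account you just created; if the SD card or SSD dies, the keys are gone with it. A minimal sketch using the directory mentioned above:
# Archive the validator keystore; copy the resulting tarball off the Raspberry Pi afterwards.
tar czf ~/eth2validators-backup.tar.gz /home/ethereum/.eth2validators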
Step 4) Start your beacon chain & validator clients
The Beacon Chain is already running in the background, so let's configure the validator. Just edit the /etc/ethereum/prysm-validator.conf file and replace the “changeme” string with your password (you can use the nano or vim editors). Now run:
sudo systemctl enable prysm-validator && sudo systemctl start prysm-validator 
Check if everything went right by running:
sudo systemctl status prysm-validator 
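To follow the validator's output while you wait:
# Stream the prysm-validator service logs; after the deposit is processed you should
# see the status advance through the activation queue.
sudo journalctl -fu prysm-validator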
Step 5) Send a validator deposit
We are almost there. Just click the “Make deposit” button and confirm the transaction.
Now you need to wait for the validator to get activated. In time, the Beacon Chain will detect the 32 ETH deposit (which contains the validator public key) and the system will put your validator in the queue. These are the validator statuses you will see during the activation process:

Grafana Dashboards

We configured 2 Grafana Dashboards to let users monitor both Eth1.0 and Eth2.0 progress. To access the dashboards just open your browser and type your Raspberry IP followed by the 3000 port:
http://replace_with_your_IP:3000 user: admin passwd: ethereum 
There are 2 dashboards available. Lots of info here: you can see, for example, whether Geth is in sync by checking (in the Blockchain section) that Headers, Receipts and Blocks are aligned, or easily find the validator status.

What's next

We are planning a new release for a multi-testnet Eth2.0 network including the Prysm, Teku and Lighthouse clients (and hopefully Nimbus).

Gitcoin Grant

Gitcoin Grants round 6 is on! If you appreciate our work, please consider donating. Even $1 can make a difference!
Follow us on Twitter. We post regular updates and info you may be interested in!


    1. Installation script:
    2. Prysm Dashboard:
submitted by diglos76 to ethereum [link] [comments]
