VMFS-6 read file error – Found stale lock

Once upon a time there was a power cut and the ESXi system went down.

…for those in a rush, scroll down to the “Fix” section…

After power-up, no details could be seen for the VM that was up at the time of the power cut. The VMs list page showed only the VMFS path to the file, and that was it.

CLI-level investigation showed that neither the VM.vmx nor the VM.vmdk file could be read; both raised errors.

Fun…. fun… fun….

Everything below goes against good practice and anything VMware would recommend, as it overwrites sensitive VMFS metadata.
That said, VMware does not provide any way to fix this as of today; the only supported route would be to contact VMware support.
All below is for educational purposes only and done at your own risk.

First steps

The first steps were towards the voma tool (the output below is taken from the internet, as no screenshots were made at the time of the issue):

voma -m vmfs -f check -d <path_to_device>

returned output similar to below:


VOMA unfortunately does not support VMFS-6 in fix mode (as of 2018.11.02 on ESXi 6.5 and 6.7).

vmfs-tools (https://glandium.org/projects/vmfs-tools/) does not support VMFS-6 either.

This left me in a cul-de-sac… almost.

Big, big thanks to Ulli aka continuum at communities.vmware.com, who helped to solve the issue.


Dump the heartbeat section of the VMFS-6 volume in question (the .vh.sf heartbeat system file is in the root folder of the VMFS-6 volume)

Verify that the file contains only locks from your system (this needs to be done on another system, as strings is not available on ESXi; scp or any other way to get the file off your host is your friend):

If the above is confirmed, generate a clean heartbeat section using the same build of ESXi and dump it to a file:

Transfer that clean file to your ESXi server and write it into the problematic VMFS-6 volume:
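The whole flow can be sketched roughly as below. This is a sketch only: the datastore names are placeholders, and the heartbeat file name, region size and offsets are assumptions that must be verified against your own dumps before writing anything back.

```
# SKETCH ONLY - names, sizes and offsets are assumptions; verify first!
# 1. Dump the heartbeat section of the affected datastore:
dd if=/vmfs/volumes/broken-ds/.vh.sf of=/tmp/hb-broken.bin bs=1M count=2
# 2. Copy it to another machine (strings is not available on ESXi) and check
#    that only your host's locks are present:
scp root@esxi:/tmp/hb-broken.bin . && strings hb-broken.bin | less
# 3. On a healthy datastore created by the SAME ESXi build, dump a clean section:
dd if=/vmfs/volumes/clean-ds/.vh.sf of=/tmp/hb-clean.bin bs=1M count=2
# 4. Transfer the clean dump back and write it over the stale one:
dd if=/tmp/hb-clean.bin of=/vmfs/volumes/broken-ds/.vh.sf bs=1M count=2 conv=notrunc
```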

Within a minute or two the previously locked files should become accessible; if not, try rebooting your ESXi host.



Locked files with VMFS 6

Create a VMFS-Header-dump using an ESXi-Host in production


Backup IMAP account

The need behind this scenario is very simple – to make sure I have an up-to-date backup of my IMAP account, as it does sometimes happen that access to the account is lost for whatever reason.

An additional requirement is to have it “just work” without any activity on my side.

What didn’t work very well was Thunderbird synchronizing all emails for offline use. The reason was that the structure of the account had moving parts – new subfolders appearing and being moved to other locations. Thunderbird required manual selection of “synchronized” folders, which was a daunting task.

The solution is to use mbsync, synchronizing from the remote IMAP server to a local folder which is then exposed via a local dovecot for use with any IMAP client. In this particular situation Thunderbird is used, though it is set not to cache any emails older than 1 day (the minimum setting within Thunderbird).

How to set it up


In my particular situation, I had to build isync myself, as the repository contained an old version (v1.1) versus the currently available v1.3.

To build isync, the following steps were used:
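The exact steps were not preserved; a Debian-style build sketch is below. The dependency package names and the source URL are assumptions – adjust them to your distribution.

```
sudo apt-get install build-essential checkinstall libssl-dev libsasl2-dev zlib1g-dev
wget https://downloads.sourceforge.net/isync/isync-1.3.0.tar.gz
tar xf isync-1.3.0.tar.gz
cd isync-1.3.0
./configure
make
sudo checkinstall --pkgname isync --pkgversion 1.3.0
```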

This built the needed isync_1.3.0-1_amd64.deb.

The file ~/.mbsyncrc was created with the following:
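The original file was not preserved; below is a minimal sketch using the isync 1.3 syntax. The host, user, paths and channel names are assumptions; it pulls everything one way into a local maildir and propagates deletions.

```
IMAPAccount remote
Host imap.example.com
User me@example.com
PassCmd "cat ~/.mbsync-password"
SSLType IMAPS

IMAPStore remote-store
Account remote

MaildirStore local-store
Path ~/mail/
Inbox ~/mail/INBOX
SubFolders Verbatim

Channel backup
Master :remote-store:
Slave :local-store:
Patterns *
Create Slave
Expunge Slave
Sync Pull
SyncState *
```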

The aim was simple – dump and maintain a copy of the IMAP account, following any removals of emails, etc.

The folder itself is backed up incrementally with borg backup for history, just in case someone removed all content on the remote side. Such a removal would propagate nicely, deleting all emails on the local side, and borg backup allows us to revert to a previous state.

mbsync finishes its task once it has gone through all folders and does not maintain a connection for updates. Therefore we need to run it periodically from cron:
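A crontab entry similar to the below does the job (the interval and the binary path are a matter of taste/installation):

```
# m  h  dom mon dow  command
*/30 *  *   *   *    /usr/local/bin/mbsync -a >/dev/null 2>&1
```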

With this we have a periodically synchronized offline repository of emails (one way – to our local folder only).


Dovecot will serve our local folder via IMAP to any client. This is dictated by the fact that Thunderbird doesn’t play well with a maildir folder directly – I couldn’t get it to discover new emails appearing in folders. I would have needed a dirty workaround removing the “.msf” files, which forced TB to discover new messages; even then the problem persisted with discovering the folder structure.

Dovecot installation and configuration is very simple:

Update /etc/dovecot/dovecot.conf:
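The exact changes in the original are unknown; for a localhost-only IMAP service, something like the below would fit (both lines are assumptions):

```
protocols = imap
listen = 127.0.0.1
```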

Update /etc/dovecot/conf.d/10-mail.conf:
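The key setting here is the mail location; the path is an assumption and must match the maildir that mbsync writes to:

```
mail_location = maildir:~/mail
```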


Update /etc/dovecot/conf.d/20-imap.conf in the appropriate section:

Newer dovecot packages use upstart, hence the start/stop/restart dovecot commands need to be used to control the process instead of /etc/init.d/dovecot restart, etc.


Standard Thunderbird setup with just one minor modification.

Deselect Server Settings -> Advanced -> “Show only subscribed folders”.

For whatever reason the subfolders are not under Inbox. It has something to do with the Dovecot configuration, but since this worked for me I didn’t waste more time on investigation. With that option deselected I didn’t need to subscribe to all folders, as I was accessing my local offline copy of the emails already.

Just to be on the safe side and avoid email duplication, Account -> Synchronization & Storage -> “Keep messages for this account on this computer” has been unchecked.

ssh to old HP iLO

When connecting to an HP iLO running some older firmware (only used internally), one can run into a situation where, after an upgrade of the local ssh client, it is no longer possible to connect to the managed HP iLO system.

The message presented is:
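The original message was not preserved; it looked similar to the usual OpenSSH negotiation error below (IP and offered algorithms are placeholders):

```
Unable to negotiate with 10.0.0.10 port 22: no matching key exchange method found.
Their offer: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
```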

A bit of research revealed that ssh has disabled some not-so-secure (read: weak) algorithm combinations, which resulted in this problem.

To work around it, just follow the recommendation from http://www.openssh.com/legacy.html

Below is a copy-paste from that site.
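On the command line (the host name is a placeholder):

```
ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 user@ilo.example.com
```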

or in the ~/.ssh/config file:
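A config-file equivalent (host name is a placeholder):

```
Host ilo.example.com
    KexAlgorithms +diffie-hellman-group1-sha1
```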

Soon after, another problem was revealed:
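Again reconstructed, not verbatim – the message was along the lines of:

```
Unable to negotiate with 10.0.0.10 port 22: no matching host key type found.
Their offer: ssh-rsa,ssh-dss
```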

There’s a solution for this too.

OpenSSH 7.0 and greater similarly disable the ssh-dss (DSA) public key algorithm. It too is weak and we recommend against its use. It can be re-enabled using the HostKeyAlgorithms configuration option:
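On the command line (host name is a placeholder):

```
ssh -oHostKeyAlgorithms=+ssh-dss user@ilo.example.com
```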

or in the ~/.ssh/config file:
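A config-file equivalent (host name is a placeholder):

```
Host ilo.example.com
    HostKeyAlgorithms +ssh-dss
```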

It’s also possible to query the configuration that ssh is actually using when attempting to connect to a specific host, by using the -G option:
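For example (host name is a placeholder; -G prints the effective configuration without connecting):

```
ssh -G user@ilo.example.com | grep -iE 'kexalgorithms|hostkeyalgorithms'
```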

Hope this helps.

Mint on Dell Precision 5520 – fan noise

The key was the patch:

@Credits go to someone out there – I didn’t take a note of the source when downloading and don’t have time to look for it now.

The important part is that the smm tool from this package is required, and the kernel module has to be rebuilt (make) and installed in the appropriate location after each kernel upgrade.

Files I’m using to disable it (it is obligatory to have i8k running, as otherwise you can burn the laptop). Content of /usr/local/sbin/fan-bios-disable:
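The file contents were not preserved; a sketch, assuming the smm tool from the package is installed as /usr/local/sbin/smm and that code 30a3 disables BIOS fan control (codes as per the i8kutils discussion linked at the end of this section):

```
#!/bin/sh
# disable BIOS automatic fan control via an SMM call (code 30a3 is an assumption)
/usr/local/sbin/smm 30a3
```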

Content of /usr/local/sbin/fan-bios-enable:
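The mirror-image sketch, under the same assumptions (code 31a3 assumed to re-enable BIOS fan control):

```
#!/bin/sh
# re-enable BIOS automatic fan control (code 31a3 is an assumption)
/usr/local/sbin/smm 31a3
```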

And /etc/i8kmon.conf:
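The original file was not preserved; a hypothetical sketch based on the standard i8kmon configuration format (fan states and temperature thresholds are assumptions – tune them for your unit):

```
set config(daemon)  1
set config(auto)    1
# state {left-fan right-fan} low-temp high-temp
set config(0)   {{0 0} -1  55}
set config(1)   {{1 1} 45  65}
set config(2)   {{2 2} 55 128}
```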

cpufreqd might be a good addition to control governors and maximum frequency, e.g. forcing the lowest frequency during the night to avoid any fan noise.

The disadvantage is that at least the brightness control doesn’t work.

More details can be found here:

XPS 9560 – Battery life optimization and fan management from Dell

Great help was found at https://github.com/vitorafsr/i8kutils/issues/6

Ubuntu/Mint/Debian btrfs compressed at installer time

The easiest way to do this is to alter the mount command of the live environment.

Boot as usual to the live session.

Move the mount executable to another location:
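For example:

```
sudo mv /bin/mount /bin/mount.orig
```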

Edit a new file using sudoedit /bin/mount and save the following script into it (alter the options as you like; here we have added compress):
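The script was not preserved; a sketch of the usual wrapper for this trick is below. It assumes the installer invokes mount with -t btrfs as the first arguments (adjust per the note that follows):

```
#!/bin/sh
# wrapper around the real mount: inject -o compress for btrfs mounts
if [ "$1" = "-t" ] && [ "$2" = "btrfs" ]; then
    shift 2
    exec /bin/mount.orig -t btrfs -o compress "$@"
else
    exec /bin/mount.orig "$@"
fi
```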


You can also match block devices like /dev/sda1 instead of -t btrfs and chain elifs to use different mount options for different devices and filesystems.

Copy the original permissions over to the new script:
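For example:

```
sudo chmod --reference=/bin/mount.orig /bin/mount
sudo chown --reference=/bin/mount.orig /bin/mount
```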

Install as usual and your btrfs partition will be mounted with the specified options (here, compress).
After the installation is finished, before exiting the live environment, alter the /etc/fstab of the newly installed system to match the specified options, so it will use the same options on new boots.

Additional options one might consider adding are:
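The list was not preserved; commonly considered btrfs mount options include the below (whether each is a good idea depends on the workload):

```
noatime
ssd
space_cache
autodefrag
compress=lzo   # or compress=zlib
```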

One more option seen online was relatime.

@Credits to someone out there – no exact source referenced, as this comes from my notes.

NextCloud sharing

Following an upgrade from OwnCloud version 8 through 9 and 10 to NextCloud v12, to my big surprise the file/folder sharing option disappeared.

The same issue has been described here:


Thorough investigation showed that the problem was somehow related to the apps folder.

See at the bottom for solution.

For test purposes the below steps were taken:

  1. The file sharing app happened to be present on the original install. With it enabled, as long as sharing was enabled in the admin tab, the files/folders view showed an empty page, whilst “Shared with me” showed a nice list of folders shared with me.
    Further tests disabling sharing in the Admin -> Sharing panel (“Allow apps to use the Share API”) resulted in a nice view of files, but no shared folders/files were present.
    NOT OK – couldn’t get it working. Cleaning up the DB manually didn’t help.
  2. Configure a fresh installation of NextCloud (freshly downloaded, fresh database).
    OK – everything worked. This confirmed that at least the vanilla version works fine.
  3. The original installation’s DB backup was restored into a new DB.
    Semi OK – some manual steps were needed to repair the installation and re-enable applications. The surprise was that the file sharing app required an upgrade and was disabled.

Option 3 was the most promising, and further tests were done to confirm everything worked fine.

Another element discovered was that shared folders ended up in my user’s main folder, which was very inconvenient.

/config/config.php had set it to 'share_folder' => '/Shared',

Further checks of the DB content revealed entries pointing to my root folder. Modifying the DB directly solved that problem, and setting 'share_folder' => '/Shared/', solved the issue for newly shared folders.

See my other post, Update SQL entries, on how to update the existing sharing table.


The steps to get the production system working were:

  1. Put your system into maintenance mode. In my case I blocked access to the NextCloud installation based on source IP, to avoid production users interacting with the system during maintenance work.
  2. Backup your NextCloud folder and DB.
  3. Install fresh NextCloud and restore config.php
  4. Move data folder.
  5. Don’t move/copy anything from apps folder.
  6. Repair system: ./occ maintenance:repair
  7. Re-enable all apps, especially files_sharing (v1.4.0).
  8. Disable maintenance.
  9. Upon first launch the system will ask to upgrade missing applications and perform additional tasks; NextCloud downloads missing apps and updates them.
  10. Review if all apps are back and re-enabled.
  11. All should work fine
  12. Disable IP based filtering to allow production users to use the system.
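The occ-related steps above could look roughly like the below, run from the NextCloud root as the web server user (the user name and whether you drive maintenance mode via occ are assumptions):

```
sudo -u www-data php occ maintenance:mode --on
sudo -u www-data php occ maintenance:repair
sudo -u www-data php occ app:enable files_sharing
sudo -u www-data php occ maintenance:mode --off
```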


Hope the above helps, as the errors received didn’t make any sense to me.

Kodi and ssl_bump Squid

UPDATE 2017-11-25: Kodi changed modules and how certs are checked (certifi and schism).

A friend of mine, a happy user of a freshly baked private DLP based on Squid and ssl_bump, quickly realized that to update his add-ons he had to bypass the ssl_bump-based proxy.

Thorough checking showed that Kodi uses Python libraries with local certificates and trusted Certificate Authorities (at least on Windows). Troubleshooting led to:

OLD: before 2017-11-25



All that was needed was to append his Root CA cert content (crt format) at the end of the file:
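A sketch – the location of the cacert.pem bundle varies between Kodi versions and platforms, so the path below is a placeholder:

```
# locate the CA bundle used by Kodi's Python first, then append the Root CA:
cat my-root-ca.crt >> /path/to/Kodi/system/certs/cacert.pem
```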

Afterwards everything came back to normal.

Chaining Squid URL rewriters – custom URL rewriter chained with SquidGuard

The below describes how to get URL filtering based on SquidGuard together with URL rewriting for quality optimization/video caching, etc. The article covers the basics of setting up a URL rewrite and later how to chain multiple URL rewriters.

Basic URL rewrite

Basic URL rewrite has been covered here https://blob.mypn.eu/get-the-resolution-right-squid-basic-url-rewrite-script/.

SquidGuard URL filtering

SquidGuard URL filtering, how to set it up and keep alive has been covered here https://blob.mypn.eu/squidguard-url-filtering/

Chain multiple URL rewriters

Squid, at least in v3.5, allows defining only a single url_rewrite_program, which causes a set of implications. The main disadvantage is that, without the use of an external program, it is impossible to chain multiple URL rewriters, as needed in our case.

The aim is to have the above URL bitrate rewrite chained together with SquidGuard filtering of unwanted content, i.e. advertisements (SquidGuard category adv / ads). The perfect solution would be the ability to use ACLs to, e.g., direct video-related domains to the URL rewriter whilst sending the rest to SquidGuard for filtering.

Given today’s limitations, the simplest (read: lazy) way is to use a chaining script which will do the work for us. Checking online, one will find http://adzapper.sourceforge.net/#download, which provides two scripts: wrapzap and zapchain. These scripts were created by Cameron Simpson back in 2000/2001 – quite some time ago – and are often referenced, with multiple examples of how to get them working.

wrapzap & zapchain

wrapzap is used to set environment variables; however, I can’t find any of them being required for our example. At the bottom, the script calls the real zapchain with the selected URL rewriters:
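The tail of wrapzap boils down to a call like the below (paths and rewriter names are placeholders):

```
exec /usr/local/bin/zapchain \
    "/usr/local/bin/rewrite-script.pl" \
    "/usr/bin/squidGuard -c /etc/squidguard/squidGuard.conf"
```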

Regardless of the number of tries, I could not get this working, despite online reports suggesting that it should just work. The other tested option was to run zapchain directly from squid.conf with the selected filters.

The main difference was probably due to the way URL rewriters were supposed to work then versus nowadays. In the past it seems a rewriter would output just the new URL, whilst the modern implementation expects the syntax:
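As documented for modern squid helpers, the reply line looks similar to:

```
[channel-ID] OK rewrite-url="http://new.url/path"
[channel-ID] ERR
```

where ERR means “leave the URL unchanged”.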

This required additional parsing and a rewrite of the original zapchain script to deal with the modified output.

This provided the expected results, submitting the output of one URL rewriter to the other. My use case would rarely result in double modification of the output. The order in which the rewriters are called reflects the hit rate of both: the ad blocker implemented with SquidGuard gets much more traffic and will short-circuit all calls to ads, effectively putting less load on the second rewriter.

Get the resolution right – Squid basic URL rewrite script

Squid allows using a URL rewrite program to alter a URL silently (rewrite) or – the preferred method – to redirect to another URL. Mobile apps often rely on data retrieved from the URL whilst at the same time not supporting redirections (e.g. web TV/movie platforms).

In its simplest form, the rewrite configuration could look like:
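A sketch of such a squid.conf fragment (the script path is a placeholder):

```
acl rewrite_quality dstdomain .some.cdn.network.inexstent
url_rewrite_program /usr/local/bin/rewrite-script.pl
url_rewrite_access allow rewrite_quality
url_rewrite_access deny all
```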

What it does is that, for all calls to domains defined as part of the rewrite_quality acl – in this case .some.cdn.network.inexstent – it passes the URL through rewrite-script.pl.

Squid launches the defined script upon start (the number of instances depends on url_rewrite_children) and passes requests to the script’s STDIN.

The full format of the request as passed by Squid is:
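As documented on the AddonHelpers wiki page, each request line has the form:

```
[channel-ID] URL client_ip/FQDN username method [urlgroup] kv-pairs
```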

This is further described at http://wiki.squid-cache.org/Features/AddonHelpers. An example request passed to the script looks like:
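A hypothetical example (IP, port and kv-pairs are placeholders):

```
0 http://some.cdn.network.inexstent/stream?bitrate=3000000 192.168.1.10/- - GET myip=192.168.1.1 myport=3128
```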

Additional details around url_rewrite_program can be found at http://www.squid-cache.org/Doc/config/url_rewrite_program/

Custom URL rewrite script

A simple URL rewrite script to rewrite the bitrate part of the URL for TVN Player / Player (aka player.pl) could look as follows:
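The original Perl script was not preserved, so below is an equivalent sketched as a POSIX shell helper. The bitrate= token name and the forced value 180000 are assumptions modelled on the description; the real player.pl URLs will differ.

```shell
#!/bin/sh
# Sketch of a Squid url_rewrite helper: force a fixed bitrate in matching URLs.
# Input line:  [channel-ID] URL client_ip/fqdn user method [kv-pairs]
# Reply line:  [channel-ID] OK rewrite-url="..."  or  [channel-ID] ERR

rewrite_reply() {
    line=$1
    chan=${line%% *}
    case $chan in
        ''|*[!0-9]*) chan='' ; url=${line%% *} ;;        # no channel-ID present
        *)           rest=${line#* } ; url=${rest%% *} ;;
    esac
    case $url in
        *bitrate=*)
            # "bitrate" token name and the forced value are assumptions
            newurl=$(printf '%s' "$url" | sed 's/bitrate=[0-9]*/bitrate=180000/')
            printf '%sOK rewrite-url="%s"\n' "${chan:+$chan }" "$newurl" ;;
        *)
            printf '%sERR\n' "${chan:+$chan }" ;;        # ERR = leave URL as-is
    esac
}

# In the real helper, Squid drives the function via STDIN, one request per line:
#   while read -r req; do rewrite_reply "$req"; done
```

Helpers must reply with exactly one line per request, without buffering delays; a printf per line, as above, is sufficient in shell.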

The script above then needs to be pointed to within squid.conf (via url_rewrite_program).

The motivation for the above was that, for unknown reasons, web and mobile players were behaving differently, and very bad quality was selected on some players regardless of the available bandwidth. The above forced the proper quality; it certainly has a lot of drawbacks due to the silent URL rewrite, as it forces all other clients to the same selected quality. Note that the example sets a very low quality, for tests.

SquidGuard – URL filtering

SquidGuard is one of the well-known URL filtering solutions. Paired with a good URL/domain list, it is a very powerful and fast solution.

SquidGuard installation is very simple and well described on the internet.

An example squidGuard.conf could look like:
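The original config was not preserved; a minimal sketch (paths, category names and the redirect target are assumptions):

```
dbhome /var/lib/squidguard/db
logdir /var/log/squidguard

dest adv {
    domainlist adv/domains
    urllist    adv/urls
}

acl {
    default {
        pass !adv all
        redirect http://127.0.0.1/blocked.html
    }
}
```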

By default the config does not include the dest sections.

To generate them, as no ready-made list/script could easily be found, the below was quickly written:
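The original script was not preserved; below is a sketch in the same spirit: walk the blacklist tree and emit one dest section per category directory that contains a domains and/or urls file. (This particular variant skips parent folders without list files, so it may not reproduce the exact flaw mentioned next.)

```shell
#!/bin/sh
# Emit squidGuard "dest" sections for every category directory in a
# blacklist tree that contains a domains and/or urls file.
gen_dests() (
    cd "${1:-/var/lib/squidguard/db}" || exit 1   # default path is an assumption
    find . -type f \( -name domains -o -name urls \) |
        sed 's|^\./||; s|/[^/]*$||' | sort -u |
        while read -r cat; do
            # squidGuard dest names cannot contain '/', so flatten the path
            printf 'dest %s {\n' "$(printf '%s' "$cat" | tr '/' '_')"
            [ -f "$cat/domains" ] && printf '    domainlist %s/domains\n' "$cat"
            [ -f "$cat/urls" ]    && printf '    urllist    %s/urls\n' "$cat"
            printf '}\n'
        done
)
```

Usage would be along the lines of `gen_dests /var/lib/squidguard/db >> /etc/squidguard/squidGuard.conf`, followed by a manual review of the generated sections.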

A minor problem with the script is that it generates incorrect lines – causing errors at the SquidGuard level – for parent folders with subcategories. But you’ll need to run this script only once. Should you find a better way to get it done, please let me know.

One well-known, updated list, free to use for private purposes, is the Shalla list.

The automated list update process could look like the below. Please note not to run it more often than every 24h, as requested by the Shalla guys, since the list is not updated more frequently.
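A sketch of such a daily refresh script; the download URL, the db path, the owner (proxy:proxy) and the archive layout (a BL/ top-level directory) are all assumptions to verify:

```
#!/bin/sh
# daily Shalla list refresh - URL and paths are assumptions, verify them
set -e
tmp=$(mktemp -d)
wget -q -O "$tmp/shallalist.tar.gz" http://www.shallalist.de/Downloads/shallalist.tar.gz
tar -xzf "$tmp/shallalist.tar.gz" -C "$tmp"          # extracts into BL/
cp -r "$tmp"/BL/* /var/lib/squidguard/db/
squidGuard -C all                                    # rebuild the .db files
chown -R proxy:proxy /var/lib/squidguard/db
squid -k reconfigure                                 # tell squid to reload
rm -rf "$tmp"
```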

The script can then be linked into the /etc/cron.daily folder:
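Assuming the update script was saved as /usr/local/sbin/update-shallalist (the name is a placeholder):

```
ln -s /usr/local/sbin/update-shallalist /etc/cron.daily/update-shallalist
```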

  1. http://terminal28.com/how-to-install-and-configure-squid-proxy-server-clamav-squidclamav-c-icap-server-debian-linux/
  2. https://calomel.org/squid_adservers.html
  3. http://www.kernel-panic.it/openbsd/proxy/proxy6.html
  4. https://help.ubuntu.com/community/SquidGuard
  5. https://www.cyberciti.biz/faq/squidguard-web-filter-block-websites/
  6. http://wiki.squid-cache.org/ConfigExamples/ContentAdaptation/C-ICAP
  7. http://dansguardian.org/
  8. http://thejimmahknows.com/network-adblocking-using-squid-squidguard-and-iptables/?doing_wp_cron=1492274530.4266140460968017578125
  9. https://forum.pfsense.org/index.php?topic=72528.0
  10. https://github.com/diladele/docker-websafety
  11. http://www.squidguard.org/Doc/extended.html
  12. http://www.tecmint.com/configure-squidguard-for-squid-proxy/
  13. http://adzapper.sourceforge.net/