NextCloud sharing

Following an upgrade from OwnCloud version 8 through 9 and 10 to NextCloud v12, to my big surprise the file/folder sharing option disappeared.

The same issue has been described here:

Thorough investigation showed that the problem was somehow related to the apps folder.

See the bottom for the solution.

For test purposes the following steps were taken:

  1. The file sharing app happened to be present on the original install. With it enabled (and sharing enabled in the admin tab), the files/folders view showed an empty page, while “Shared with me” showed a nice list of shared folders.
    Further tests with sharing disabled in the Admin -> Sharing panel (“Allow apps to use the Share API”) resulted in a nice view of files, but no shared folders/files were present.
    NOT OK – couldn’t get it working. Cleaning up the DB manually didn’t help.
  2. Configure a fresh installation of NextCloud (freshly downloaded, fresh database).
    OK – everything worked. This confirmed that at least the vanilla version works fine.
  3. The original installation’s DB backup was restored into the new DB.
    Semi OK – some manual steps were needed to repair the installation and re-enable applications. The surprise was that the file sharing app required an upgrade and was disabled.

Option 3 was the most promising, so further tests were done to confirm that everything works fine.

Another discovery was that shared folders ended up in my user’s main folder, which was very inconvenient.

/config/config.php had it set to 'share_folder' => '/Shared',

Further checks of the DB content revealed entries pointing to my root folder. Modifying the DB directly solved that problem, and setting 'share_folder' => '/Shared/', fixed the issue for newly shared folders.

See my other post, Update SQL entries, on how to update the existing sharing table.


Steps to get production system working were:

  1. Put your system into maintenance mode. In my case I blocked access to the NextCloud installation based on source IP, to prevent production users from interacting with the system during the maintenance work.
  2. Back up your NextCloud folder and DB.
  3. Install a fresh NextCloud and restore config.php.
  4. Move the data folder.
  5. Don’t move/copy anything from the apps folder.
  6. Repair the system: ./occ maintenance:repair
  7. Re-enable all apps, especially files_sharing (v1.4.0).
  8. Disable maintenance mode.
  9. Upon first launch the system will ask to upgrade missing applications and perform additional tasks; NextCloud downloads the missing apps and updates them.
  10. Review whether all apps are back and re-enabled.
  11. All should work fine.
  12. Disable the IP-based filtering to allow production users to use the system.


Hope the above helps, as the errors received didn’t make any sense to me.

Update SQL entries

Working on some other elements, I’ve run into the need to update a large number of rows within an SQL table.

Below are examples of how it can be done effectively.
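As a sketch of the pattern, here is a hypothetical bulk update run against an in-memory SQLite database first, so the statement can be tested safely. The oc_share table and file_target column are assumptions modelled on NextCloud’s schema; always try it on a copy before touching production.

```python
# Hypothetical sketch: prefix every share target with /Shared so existing
# shares land in the shared folder. Table/column names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE oc_share (id INTEGER PRIMARY KEY, file_target TEXT)")
conn.executemany(
    "INSERT INTO oc_share (file_target) VALUES (?)",
    [("/Documents",), ("/Shared/Photos",), ("/Music",)],
)

# The actual bulk update: prefix every target not already under /Shared/
conn.execute(
    "UPDATE oc_share SET file_target = '/Shared' || file_target "
    "WHERE file_target NOT LIKE '/Shared/%'"
)

rows = [r[0] for r in conn.execute("SELECT file_target FROM oc_share ORDER BY id")]
print(rows)  # ['/Shared/Documents', '/Shared/Photos', '/Shared/Music']
```

The same UPDATE … WHERE … NOT LIKE pattern works on MySQL/MariaDB (use CONCAT instead of ||).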



Sensors – adjust high & crit values – ACPI


The approach below might be incorrect, as it causes all temperatures (including the current reading) to be calculated 10 °C lower.


With the default lm-sensors configuration I experienced shutdowns from time to time. Looking at the details, it seemed like lm-sensors had been instructed by the BIOS with far too high “high” and “crit” values.


Before the change:

Adapter: Virtual device
temp1:        +70.5 C  (crit = +126.0 C)

Adapter: ISA adapter
Core 0:       +70.0 C  (high = +100.0 C, crit = +100.0 C)
Core 1:       +70.0 C  (high = +100.0 C, crit = +100.0 C)


After the change:

Adapter: Virtual device
temp1:        +70.5 C  (crit = +126.0 C)

Adapter: ISA adapter
Core 0:       +60.0 C  (high = +90.0 C, crit = +90.0 C)
Core 1:       +60.0 C  (high = +90.0 C, crit = +90.0 C)


Check which sensors are reported and how, e.g. with sensors -u:

Based on the above, create a configuration file under /etc/sensors.d/, e.g.:
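A hypothetical example of such a file (the chip name, sensor labels, and the -10 °C offset are assumptions; note that a compute statement shifts the current readings too, which is exactly the caveat noted at the top of this section):

```
# /etc/sensors.d/dell-d620.conf (file name is arbitrary)
chip "coretemp-*"
    compute temp2 @-10, @+10
    compute temp3 @-10, @+10
```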


For some reason on my Ubuntu 14.04 system (a Dell D620) the critical value wasn’t updated properly and took the high value instead.

P.S. man sensors.conf suggests using the “set” statement, but it didn’t work for me.

Here is an example of what didn’t work:
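A sketch of the kind of statement that didn’t work for me (labels and values are examples):

```
chip "coretemp-*"
    set temp2_max 90
    set temp2_crit 90
    set temp3_max 90
    set temp3_crit 90
```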


Network folder synced to OneDrive/SharePoint

The SharePoint synchronization mechanism, which uses groove.exe in the background, does a lot to block synchronization to a network folder.

While one might ask why anyone would sync a network location such as SharePoint to another network folder, there are situations where it is required. One such case is when the system runs in a VM and the data needs to be synchronized to a VM shared folder, which the operating system sees as a network drive; the folder is then available to other VMs, avoiding the waste of space that would occur if it were placed on the “C:” drive.

The workaround, which works pretty well, is to use a symbolic link. Before doing so, make sure to close any instance of groove.exe or any other software using data in the synchronized folder.

Should you have your folders synchronized already, the steps are:

  1. Stop all groove.exe processes.
  2. Rename the existing folder, in the example below <Company Name Team>. If the OneDrive folder was selected to be c:\OneDrive, it will be c:\OneDrive\<Company Name Team>, and there will be another, personal OneDrive folder.
    If this worked fine, it means all programs were closed properly; otherwise an error would be raised.
  3. Open a CLI with Administrator privileges and create the symbolic link.
  4. Re-run OneDrive and/or Groove to verify that everything has been recognized properly.
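The link-creation step itself can be sketched as below; all paths and the share name are placeholders for your own setup:

```
rem From an elevated cmd.exe: create a directory symbolic link in place
rem of the renamed folder, pointing at the network location
mklink /D "C:\OneDrive\Company Name Team" "\\vmhost\share\Company Name Team"
```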

This should be as simple as that.

Important: synchronization of a private folder on SharePoint requires a newer OneDrive version than the one used for shared folders, and that version has an additional check which does not seem to accept the trick above.

Kodi and ssl_bump Squid

UPDATE 2017-11-25: Kodi changed modules and how certs are checked (certifi and schism).

A friend of mine, a happy user of a freshly baked private DLP based on Squid and ssl_bump, quickly realized that to update his add-ons he had to bypass the ssl_bump-based proxy.

Thorough checking showed that Kodi uses Python libraries with local certificates and trusted Certificate Authorities (at least on Windows). Troubleshooting led to:

OLD: before 2017-11-25



All that was needed was to add his Root CA certificate content at the end of the file (crt format):
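The append itself is a simple concatenation; the exact location of Kodi’s bundled cacert.pem varies per install and is left elided here:

```
cat MyRootCA.crt >> .../requests/cacert.pem
```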

Afterwards everything came back to normal.

Log all request details on Squid

Sometimes it is required to log all request details on Squid, e.g. when you need to figure out the details to write your own URL rewriter, to optimize videocache, or for other statistics.

By default Squid strips details after the “?”, and all we need is to turn that off:

strip_query_terms off
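Beyond that, a hypothetical squid.conf fragment that also logs the full request line could look like this (the format codes are standard Squid logformat codes; pick the fields you need):

```
# log timestamp, client, method, full URL and status
logformat detailed %ts.%03tu %>a %rm %ru %>Hs
access_log /var/log/squid/access.log detailed
```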

More details at


Chaining Squid URL rewriters – custom URL rewriter chained with SquidGuard

Below describes how to combine URL filtering based on SquidGuard with URL rewriting for quality optimization/video caching, etc. The article covers the basics of setting up a URL rewrite and, later, how to chain multiple URL rewriters.

Basic URL rewrite

Basic URL rewrite has been covered here.

SquidGuard URL filtering

SquidGuard URL filtering, how to set it up and keep it alive, has been covered here.

Chain multiple URL rewriters

Squid, at least in v3.5, allows only a single url_rewrite_program to be defined, which has a set of implications. The main disadvantage is that, without an external program, it is impossible to chain multiple URL rewriters, as needed in our case.

The aim is to chain the URL bitrate rewriter above with SquidGuard filtering of unwanted content, i.e. advertisements (SquidGuard category adv/ads). The perfect solution would be to use ACLs to, e.g., direct video-related domains to the URL rewriter while sending the rest to SquidGuard for filtering.

Given today’s limitations, the simplest (read: lazy) way is to use a chaining script which does the work for us. Searching online, one will find a page providing two scripts: wrapzap and zapchain. These scripts were created by Cameron Simpson back in 2000/2001, quite some time ago, and are often referenced, with multiple examples of how to get them working.

wrapzap & zapchain

wrapzap is used to set environment variables; however, I can’t find any of them being required for our example. At the bottom, the script calls the real zapchain with the selected URL filters.

Regardless of the number of tries, I could not get this working, despite online reports suggesting it should just work. The other tested option was to run zapchain directly from squid.conf with the selected filters.

The main difference was probably due to how URL rewriters were supposed to work then vs. nowadays. In the past, a rewriter would output just the new URL, while the modern implementation expects the following syntax:
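Roughly, the two styles look like this (the leading channel-ID only appears when helper concurrency is enabled; treat the exact lines as a sketch rather than a verified transcript):

```
# old style: the rewritten URL alone (empty line = no change)
http://example.com/new-url

# modern helper protocol (Squid 3.4+)
0 OK rewrite-url="http://example.com/new-url"
0 ERR
```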

This required additional parsing and a rewrite of the original zapchain script to deal with the modified output.

This provided the expected results, submitting the output of one URL rewriter to the next. My use case rarely results in double modification of the output. The order in which the rewriters are called should reflect their hit rates: the ad blocker implemented with SquidGuard gets much more traffic and shortens all calls to ads, effectively putting less load on the second rewriter.
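The chaining idea can be sketched as below. This is a hypothetical, simplified model of a zapchain-style wrapper: the real version spawns each rewriter as a subprocess and relays the Squid helper protocol over their stdin/stdout, while this sketch only models the chaining logic itself (each stage may return a new URL, or None for “no change”).

```python
#!/usr/bin/env python3
# Simplified model of chaining URL rewriters: pass the URL through each
# stage in order; a stage returning None leaves the URL untouched.
import sys

def chain(rewriters, url):
    """Apply each rewriter in turn, feeding its output to the next one."""
    for rewriter in rewriters:
        url = rewriter(url) or url
    return url

def main():
    # Toy stand-ins for SquidGuard (ad shortener) and the bitrate rewriter
    shorten_ads = lambda u: "http://localhost/empty.gif" if "ads." in u else None
    force_bitrate = lambda u: u.replace("b=3000", "b=800")
    # Squid feeds one request per line: "<channel-ID> <URL> <extras...>"
    for line in sys.stdin:
        parts = line.split()
        if len(parts) < 2:
            continue
        channel, url = parts[0], parts[1]
        new_url = chain([shorten_ads, force_bitrate], url)
        sys.stdout.write('%s OK rewrite-url="%s"\n' % (channel, new_url))
        sys.stdout.flush()

if __name__ == "__main__" and not sys.stdin.isatty():
    main()
```

The order of the list passed to chain() is where the hit-rate consideration above comes in.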

Get the resolution right – Squid basic URL rewrite script

Squid allows the use of a URL rewrite program to alter a URL silently (rewrite) or, the preferred method, to redirect to another URL. Mobile apps often rely on data retrieved from the URL while at the same time not supporting redirections (e.g. web TV/movie platforms).

In the simplest form the rewrite configuration could look like:
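A hypothetical minimal configuration (the domain and script path are placeholders):

```
# squid.conf – pass only selected domains through the rewriter
acl rewrite_quality dstdomain .video-cdn.example.com
url_rewrite_program /etc/squid/rewrite_quality.py
url_rewrite_children 5
url_rewrite_access allow rewrite_quality
url_rewrite_access deny all
```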

What it does is that all calls for domains defined as part of the rewrite_quality acl are passed through the URL rewriter.

Squid launches the defined script upon start (the number of copies depends on url_rewrite_children) and passes requests to the script’s STDIN.

The full description of the request line as passed by Squid is:

[channel-ID] URL client_ip/fqdn user method [kv-pairs]

The example request passed to the script looks like:
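An illustrative request line (all values are made up):

```
http://example.com/video?b=3000 192.168.1.10/- - GET myip=192.168.1.1 myport=3128
```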

Additional details around url_rewrite_program can be found at

Custom URL rewrite script

A simple URL rewrite script to rewrite the bitrate part of the URL for TVN Player / Player could look as follows:
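A hypothetical sketch of such a helper is below. The “b=<kbps>” query field is an assumption standing in for the real player URLs; adjust the regular expression to whatever your player actually requests.

```python
#!/usr/bin/env python3
# Sketch of a Squid url_rewrite_program helper forcing a fixed bitrate.
import re
import sys

BITRATE_RE = re.compile(r"([?&]b=)\d+")
FORCED_BITRATE = "800"  # kbit/s – deliberately low, for tests

def rewrite(url):
    """Return the URL with the bitrate replaced, or None when unchanged."""
    new_url, count = BITRATE_RE.subn(r"\g<1>" + FORCED_BITRATE, url)
    return new_url if count else None

def main():
    # Squid feeds one request per line: "<channel-ID> <URL> <extras...>"
    for line in sys.stdin:
        parts = line.split()
        if len(parts) < 2:
            continue
        channel, url = parts[0], parts[1]
        new_url = rewrite(url)
        if new_url:
            sys.stdout.write('%s OK rewrite-url="%s"\n' % (channel, new_url))
        else:
            sys.stdout.write("%s ERR\n" % channel)  # ERR = leave URL as-is
        sys.stdout.flush()

if __name__ == "__main__" and not sys.stdin.isatty():
    main()
```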

The script above then needs to be pointed to within squid.conf:
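E.g. (the path is a placeholder):

```
url_rewrite_program /usr/bin/python3 /etc/squid/rewrite_quality.py
```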

The motivation for the above was that, for unknown reasons, web and mobile players behaved differently, and a very bad quality was selected on some players regardless of the available bandwidth. The rewrite forced the proper quality, but it certainly has a lot of drawbacks: because the URL is rewritten silently, all other clients are forced to the same selected quality. Note that the example sets a very low quality, for tests.

SquidGuard – URL filtering

SquidGuard is one of the best-known URL filtering solutions. Paired with a good URL/domain list, it is a very powerful and fast solution.

SquidGuard installation is very simple and well described on the internet.

Example squidGuard.conf  could look like:
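A hypothetical minimal example (paths and the redirect target are placeholders; the adv category matches the ad-blocking use case above):

```
dbhome /var/lib/squidguard/db
logdir /var/log/squidguard

dest adv {
    domainlist adv/domains
    urllist    adv/urls
}

acl {
    default {
        pass !adv all
        redirect http://localhost/empty.gif
    }
}
```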

By default the config does not include the dest sections.

To generate them, as no other list/script could easily be found, the below was quickly written:
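A hypothetical re-creation of the generator (the original script is not preserved here): walk a Shalla-style blacklist tree, one folder per category containing “domains” and/or “urls” files, and print matching squidGuard dest sections. The dbhome path is a placeholder.

```python
#!/usr/bin/env python3
# Generate squidGuard "dest" sections from a blacklist directory tree.
import os

def dest_sections(dbhome):
    """Return a list of squidGuard dest blocks, one per category folder."""
    sections = []
    for root, _dirs, files in sorted(os.walk(dbhome)):
        lists = [f for f in ("domains", "urls") if f in files]
        if not lists:
            continue  # parent folders without list files get no dest block
        category = os.path.relpath(root, dbhome)
        lines = ["dest %s {" % category.replace(os.sep, "_")]
        for name in lists:
            kind = "domainlist" if name == "domains" else "urllist"
            lines.append("    %s %s/%s" % (kind, category, name))
        lines.append("}")
        sections.append("\n".join(lines))
    return sections

if __name__ == "__main__":
    print("\n\n".join(dest_sections("/var/lib/squidguard/db")))
```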

The minor problem with the script is that it generates incorrect lines for parent folders with subcategories, causing errors at the SquidGuard level. But you’ll need to run this script only once. Should you find a better way to get it done, please let me know.

One well-known, regularly updated list, free to use for private purposes, is the Shalla list.

The automated list update process could look like the script below. Please do not run it more often than every 24h, as per request from the Shalla maintainers; the list is not updated more often anyway.
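A hypothetical sketch of such an update script (paths, the download URL, and the proxy user are assumptions; adjust before use):

```
#!/bin/sh
# Fetch and rebuild the Shalla blacklists - run at most once a day
cd /var/lib/squidguard || exit 1
wget -q http://www.shallalist.de/Downloads/shallalist.tar.gz || exit 1
tar xzf shallalist.tar.gz
squidGuard -C all            # rebuild the Berkeley DB files
chown -R proxy:proxy .       # squid runs as "proxy" on Ubuntu
squid -k reconfigure         # pick up the new databases
rm -f shallalist.tar.gz
```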

The script can then be linked into the /etc/cron.daily folder:
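E.g. (the script location is a placeholder; note that run-parts skips file names containing dots, hence the extension-less link name):

```
ln -s /usr/local/sbin/shalla_update /etc/cron.daily/shalla_update
```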


Thumbs on tunnels – terminate tunnel and check content – DLP

The past was easy peasy; not the case anymore these days. HTTPS was rarely used, and only where it really needed to be. Anyone on the wire could see what was going on. And yeah, that meant literally everyone.

Was it good? No, not at all. Everyone could track anyone, collect information, behavior, etc. The result? Everyone started to move towards HTTPS (SSL, then TLS). This has a couple of good and bad outcomes. The Squid cache ratio went down, and I mean very, very low these days. Most connections handled by Squid nowadays are tunnels (CONNECT), and anything can be sent through such a tunnel: not only private data, credit card numbers, etc., but also ads and viruses (sic!).

This created a specific need to be able to check the content of the stream, especially in enterprise environments. DLP systems can only work on streams if they can check the payload, i.e. decrypted traffic, which means they need to be the well-known man-in-the-middle: the tunnel endpoint from the client side, initiating a new tunnel to the server. Only this allows inspecting the traffic.

At the same time one could ask: hey, what about certificates? The long-winded answer can be found elsewhere, but the short answer is that in an enterprise environment there is at least one Root Certificate Authority (CA), and an Intermediate Certificate Authority can be created. The root or intermediate CA (called simply the CA afterwards), together with its key file, is uploaded to the DLP system, allowing it to generate new certificates on the fly for terminated tunnels.

The CA is already trusted in the enterprise environment, as the Root CA certificate is added to the trusted CA ring on the client host as part of the operating system deployment package or domain join.

Since the client trusts the Root CA, it automatically trusts certificates signed by that CA – this is how Public Key Infrastructure works. Additionally, the DLP system usually tries to mimic all original certificate parameters, and only the CA-related details differ. But people rarely check certificate details if everything is green and no popup/error is raised.

private DLP

With all the above said, one can have their own DLP system based on Squid. This “little piece of software” is great at handling huge loads, caching data and calling others for help. This is what we’re going to do today.

To terminate SSL connections Squid uses the ssl_bump functionality. The Ubuntu 16.04 LTS default package is not built with this great functionality, hence we need to start with a little preparation.

Let’s get the sources and all needed libraries (in everything below any sudo call is skipped, as it just makes the output longer and you, dear reader, certainly know how to use sudo).
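A sketch of the kind of commands involved on Ubuntu 16.04 (assuming deb-src entries are enabled in sources.list; not a verified transcript):

```
apt-get build-dep squid3
apt-get source squid3
apt-get install libssl-dev devscripts fakeroot
```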

Then we need to apply a patch to enable SSL as needed.

To apply the patch, use your standard methodology: patch -p<level> < diff-file.patch

Afterwards the squid package and any missing packages need to be installed.

Certificate generation

First of all we need a folder where we will store it all:
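A sketch of the folder and CA generation, assuming openssl is installed; the folder location, file names, and the certificate subject are placeholders for your own setup:

```shell
# Folder for the bumping CA (location is a placeholder)
mkdir -p squid_ca
cd squid_ca

# Self-signed 10-year CA (key + certificate; subject is a placeholder)
openssl req -new -newkey rsa:2048 -sha256 -days 3650 -nodes -x509 \
  -keyout myCA.key -out myCA.crt \
  -subj "/C=PL/O=Home/CN=Private Squid CA"

# DER copy for easy import on Windows clients
openssl x509 -in myCA.crt -outform DER -out myCA.der
```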


The certificates mean:

Depending on the client system, the certificate import will look different. A good idea might be to place the certificate on some easily accessible server, e.g. the local wpad system.

Once the certificate is downloaded it should be installed. On Windows this can be done with:
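For example (run from an elevated prompt; the file name is a placeholder):

```
certutil -addstore -f "ROOT" myCA.der
```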

Some apps do not respect operating-system-level certificates, but most will. Some apps might need to be restarted; a full system restart shouldn’t be required, but who knows what type of ancient system one might be using?

All the new certificates generated on the fly need to be stored somewhere, so the folder needs to be created:
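For the stock Squid 3.5 helper, initializing the on-disk certificate cache can be sketched as below (the helper path and cache location vary per build and are assumptions):

```
mkdir -p /var/lib/ssl_db
/usr/lib/squid/ssl_crtd -c -s /var/lib/ssl_db
chown -R proxy:proxy /var/lib/ssl_db
```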

And update squid.conf (not all SSL cert ERROR related flags should be set as below; tune them to your needs).
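A hypothetical core fragment (paths and sizes are placeholders; sslproxy_cert_error allow all accepts upstream certificate problems, so use it consciously):

```
# squid.conf - ssl_bump listener and on-the-fly certificate generation
http_port 3127 ssl-bump \
    cert=/etc/squid/ssl_cert/myCA.crt \
    key=/etc/squid/ssl_cert/myCA.key \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB

sslproxy_cert_error allow all
```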

Create a set of files under /etc/squid/ssl_bump:

  1. sslBumpnet – subnets/hosts which we will bump; can be selective if needed.
  2. sslnoBumpnet – subnets/hosts we won’t bump (see the logical construction above and tune it to your needs).
  3. sslnoBumpnetdst – domains/servers we won’t bump.
  4. sslnoBumpSites – server names (SNI) we won’t bump.
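A hypothetical squid.conf fragment wiring those files into bump/splice decisions (directives follow Squid 3.5 peek-and-splice syntax; tune the order to your needs):

```
acl sslBumpnet      src       "/etc/squid/ssl_bump/sslBumpnet"
acl sslnoBumpnet    src       "/etc/squid/ssl_bump/sslnoBumpnet"
acl sslnoBumpnetdst dstdomain "/etc/squid/ssl_bump/sslnoBumpnetdst"
acl sslnoBumpSites  ssl::server_name "/etc/squid/ssl_bump/sslnoBumpSites"

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice sslnoBumpnet
ssl_bump splice sslnoBumpnetdst
ssl_bump splice sslnoBumpSites
ssl_bump bump sslBumpnet
```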

sslnoBumpnetdst might be tricky to set up at the beginning: some apps have built-in certificates and verify the connection against them, e.g. the Google Play store and a couple of others.

An interesting find was what banks are doing with data: in at least one case, for statistics purposes, GET requests with bank account details, including the balance and transaction details, were being sent out to an online statistics agency. This was against any banking standard, and the bank should be prosecuted for it. Spotting it was only possible after terminating tunnels, as otherwise the GET was not visible. This kind of interception can only be acceptable for tests and for your own private use at home, in an isolated lab (as always).

The list below was fine-tuned based on tests and was longer earlier.
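A few illustrative entries (these are assumptions based on apps known for certificate pinning, not the original list):

```
# sslnoBumpnetdst - destinations never bumped
.play.google.com
.android.clients.google.com
.ggpht.com
```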

The last step is to restart squid.

To test it, select port 3127 as the proxy. Once everything is tested, switch to transparent interception (don’t forget to have the CA trusted on the client side).


For non-Android based systems a proxy.pac/wpad.dat file can be created:
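A hypothetical minimal wpad.dat/proxy.pac: send everything through the Squid box except plain local host names. The proxy address is a placeholder (the port matches the 3127 test port used above):

```javascript
// Minimal proxy auto-config sketch
function FindProxyForURL(url, host) {
    // Names without a dot are local - go direct
    if (host.indexOf(".") === -1) {
        return "DIRECT";
    }
    return "PROXY 192.168.1.1:3127; DIRECT";
}
```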

To get that working, wpad.yourdomain and the wpad host need to resolve to your web server, and the wpad.dat/proxy.pac file needs to be in its root folder.

The Android platform tends not to process this file (at least as of April 2017), and manual proxy settings in the WiFi section need to be set.

This should all work… but hey, problems are expected.

  1. The usual problem is that the CA is not trusted.
  2. The client/app has its own CA list and does not trust the operating-system-level list. This requires adding the CA to the app’s trusted ring.
  3. Perl/Python based apps might have their own local SSL and root cert trusted ring; see: Kodi and ssl_bump Squid.
  4. For troubleshooting, full logging is often useful; see Log all request details on Squid.


Links and reads

  1. Diladele non-free:
  2. ClamAV & SquidClamAV