Azure CLI – az login – behind SSL intercepting DLP/proxy


Should you be in a similar situation to many of us, behind a corporate firewall that intercepts SSL connections (e.g. DLP/Layer 7 inspection), trying to use the Azure CLI (az) will raise the following error:

# az login
Please ensure you have network connection. Error detail: HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url: /common/oauth2/devicecode?api-version=1.0 (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')],)",),))

The above will be shown even if the SSL CA certificate used by the intercepting proxy is installed system-wide on your machine.

Normally, to get such a certificate installed for system-wide use, the following needs to be done:

  1. copy the certificate to: /usr/local/share/ca-certificates/<your-ca>.crt
  2. run, e.g. on Debian/Ubuntu flavours: update-ca-certificates
  3. re-login
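For example, on a Debian/Ubuntu flavour (proxy-ca.crt being a placeholder name for your proxy's CA certificate):

# cp proxy-ca.crt /usr/local/share/ca-certificates/proxy-ca.crt
# update-ca-certificates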

This, though, won't work with the Azure CLI, as it uses Python's own certificate bundle and ignores the system-wide one.

The easiest workaround is to force the Azure CLI to use the system-wide SSL trusted certificate file:

# echo "export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt" >> ~/.bashrc

Once the above is done, re-login and run az login – all should work fine.

Hope this helps.

OpenVPN & Office 365 / Windows 10 nightmare

"We are unable to connect right now. Please check your network and try again later."
-- your beloved vendor

Yep, we've all seen it, and if you're here reading this, it only means you've had enough of it and it is time to resolve this Microsoft Windows known issue.

OpenVPN and other VPN solutions allow pushing all traffic through the VPN gateway. All good so far, right? To do that, a default gateway is set.

Again, if you're here, it means that the connectivity "issue" you've run into is not a real connectivity issue: you've certainly established that, even while this message is shown, you're able to browse the internet using a browser, icmp/traceroute shows the correct path via the VPN gateway, etc.

So what the heck is going on here, and why is Office 365 so stubborn with its claim?

The problem originates from the fact that Office 365 relies on Network Location Awareness (NLA), which uses the Network Connection Status Indicator (NCSI) to decide whether the "connection" to the internet works for its call-home functions.

The inner workings of NCSI are that it goes out and tries to fetch http://www.msftncsi.com/ncsi.txt in order to verify "Internet connectivity". Some reports show dns.msftncsi.com and 131.107.255.255 being probed as well.
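You can check the probe by hand; ncsi.txt is a plain-text file containing just the string shown below:

# curl http://www.msftncsi.com/ncsi.txt
Microsoft NCSI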

None of that sounds scary; all reasonable, right?

To make sure that traffic is going through the VPN gateway, you've already checked traceroute (tracert).

The setting to push all traffic via the gateway on OpenVPN is:

push "redirect-gateway def1"

That's where a lot of us were stuck, myself for a year, returning to it from time to time. At one point I thought that M$ needs to check "licensing" and that there is some sort of smartness/dumbness there, doing this as the first thing while starting Windows, before you get any control, so that it sees its proper public address. To make troubleshooting worse, I noticed a lot of SYN followed by RST/RST,ACK when connected via VPN, while over a clear connection the TCP session was set up nicely. This in itself made me think it was some sort of multi-layer communication, where the first connection pulls something and prompts Microsoft servers to accept subsequent connections from a given IP; as a result, after connecting to the VPN, I was getting RST and was not able to connect.

This was until today, when I got upset enough with the problem to spend additional time on testing, redirecting traffic, resetting status, etc. This brought me to the following discovery.

Default gateway or not…?

Looking at the ipconfig output and the Network and Sharing Center, I discovered that:

a) Network and Sharing Center shows the VPN/OpenVPN connection as "No Internet",

b) there is no default gateway in the ipconfig output.

Hold on… no default gateway? How the heck does my whole internet access via VPN work then (ok, outside of Office 365)? No, no proxy, no hijack, no nothing. Checking the routing table, all looks ok'ish, as always: VPN routes, plus the 127.0.0.0 route, etc. Not the best metrics, but everything else works.

What the heck? – Solution

This led me to a small test: what if I added the default gateway manually, or did not rely on the OpenVPN "redirect-gateway def1" setting? Going ahead with:

push "route 0.0.0.0 0.0.0.0"

in the OpenVPN ccd (client-config-dir) sorted the thing out. As simple as that.

Side info: the ccd file is named after the client certificate used when connecting to OpenVPN. The setting could equally be pushed out to all connecting clients, but that wasn't my requirement, for other reasons.
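For reference, a minimal sketch of the setup (paths assumed; client1 stands for the client certificate's common name):

# in the server config: enable per-client configuration files
client-config-dir /etc/openvpn/ccd

# /etc/openvpn/ccd/client1
push "route 0.0.0.0 0.0.0.0"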

This brought back "Internet" access under the Network and Sharing Center and a default gateway in the ipconfig output.

Victory!

References:

  • https://answers.microsoft.com/en-us/windows/forum/windows_10-networking/network-connection-status-indicator-ncsi-showing/02664ddf-4eac-449a-8318-bdae1a5bad3d
  • https://www.macwheeler.com/windows-10-office-365-cannot-connect-over-openvpn-fixed/ – found after working out the solution; it shows enforcing the same from the client side using the Windows UI.

Apple censorship – posts on LSIs (liquid submersion indicators) removed by Apple

Dear,

There's a lot of noise about Apple products and how Apple refuses to accept warranty repairs by abusing LSIs.

Apple went one step further: it censors and tries to mute all discussions about this subject.

Below is an example of a post they removed, claiming it was "inappropriate" for the forum, whilst it was posted under a thread where LSIs were the subject.

Original post below.

Hi (SecTec),

Thanks for participating in the Apple Support Communities.

We’ve removed your post macbook pro problem – “water damage”- Really? because it contained either product feedback or a feature request that was not constructive.

To read our terms and conditions for using the Communities site, see this page:  Apple Support Communities  – Terms of Use

We hope you’ll keep using our Support Communities. You can find more information about participating here:  Apple Support Communities  – Tutorials

If you have comments about any of our products, we welcome your feedback:  Apple – Feedback

We’ve included a copy of your original post below.

Thanks,
Apple Support Communities Staff


Original – removed post:

I'll add a bit of my story to it, same as above, which just highlights that there's something wrong with the MacBook Pro design and/or LSIs (liquid indicators).

The story started back in 2014: I bought a new MacBook Pro, the best available model. I used it professionally, traveling a lot by plane, fully aware of how to operate computer equipment, as I used to assemble PCs in the past and have been in the industry for 20+ years. Worth noting that prior to the MacBook Pro I used all sorts of other vendors, Dell, Lenovo, HP, and never had any problems with equipment; I replaced it only when it got too old/slow.

Back to the main story: the acquired MacBook was put into a Speck enclosure to protect it from body damage; this was ca. September 2014.

I also have additional insurance covering any liquid spills, etc., so essentially I'm not bothered by that on the expenses side, but more by the fact that I've never spilled any liquid on any of my laptops. I learnt my lesson with coffee over a keyboard back in the desktop PC days, so I am very careful these days and never have any food/liquids around laptops.

 

Fast forward to January 2015: I was traveling to Florida, this time for leisure. I rarely used the system; however, on the very last day I was checking the flight plan for the day when the SSD died. The system hung and, upon an attempted reboot, reported a missing drive.

It was delivered to a service center. I forced the technician to start the tests in front of me, which didn't make them happy. There were no complaints about any damage to the body. I had to leave the system with them.

To my huge surprise, a couple of days later I received a call that LSIs had been triggered.

I requested to see that, and at that time discovered that:

a) the USB port was damaged (sic!), the technician claiming it had been that way since the beginning,

b) one LSI was indeed triggered (red).

 

What caught my attention was that only one LSI was triggered and none of the others. This led me to ask the technician how in the world this one could have been triggered while no other was.

This led him to seek Apple authorization for an exceptional approval.

As a result, the SSD was replaced.

 

In April 2015, the system became slow (like really slow, think of a PC with an i386 CPU) and started to report that the battery needed replacement; it was powering off seconds after the charger was disconnected.

 

The system was then delivered to an Apple Store. It was inspected by a technician, and I received a call that the system had been exposed to water and that this stopped any further activities from their side.

It took some effort to collect the documentation from the previous case and present to them that nothing had happened since the last repair and that no other LSIs had been triggered; this technician also admitted it was not the first time he had seen a single LSI triggered with no others, which would be difficult to achieve.

The system was repaired, this time the battery, which meant the whole lower body due to the new battery specifics.

 

Moving forward to December 2015, the very same Mac failed again. The same issue as previously: a slow system and charging problems.

 

This meant a couple of things for me:

a) the previous slowness happened just before a business trip, but as it was the night before, I thought it was just a slow system and a battery issue. I took the Mac with me, only to discover that it was useless. This forced me to buy a new system with a similar specification to the Mac; due to my profession I have to have systems with 16GB RAM, SSD, etc. So, fun, fun… a lot of expenses.

b) this time it failed during a business trip and, guess what, I didn't have the earlier acquired spare system with me. Due to the importance, length and requirements of the trip, I had to acquire yet another system (sic!). In total, two spare systems; thank you, Apple.

c) the system was out of the standard Apple warranty and, because of all the above, I was sick of the issues with this system, so I submitted a claim to have the system replaced.

 

A couple of days later, I received a call from a technician asking questions about the issue, the history, etc. Good, they had started to work on it.

To my surprise there was no sign of life from their side for a very long time; when they called back, it was 20 days after reporting the issue to them.

Without any surprise, I was told that LSIs were red and that, as a result, they denied all my claims due to water, etc. This was certainly a nice excuse and attempt from their side, as these repairs are at the seller's cost and not necessarily considered a "warranty" repair. I'm unsure how Apple plays with their cost centers, etc.; it is outside of this story.

 

Their push-back was so mean that it triggered me to not exercise my insurance.

It took a good number of additional days to get it executed.

In March 2016, I had a new MacBook Pro in hand.

 

I was so ****** by that time, and so used to my new Lenovo, that I kept using it, happily… no issues.

The MacBook was happily sleeping in its unsealed, original Apple box.

For some reason I decided to start using the system again around December, as the Retina display provides nice colors compared to the Lenovo display.

 

So, I was a happy Mac user again.

 

Just until March 2017… so that counts as 3 months at most!!!

 

The Mac failed again. This time it had close to zero trips; it stayed at home. And this time it failed in a different way: it was left at the desk and went to sleep (display off). The next day I tried to wake it up: no reaction. I tried all the combinations, SMC reset, etc. I noticed that the MagSafe plug was showing a green indicator; since MacBooks are so great at communicating their state to the user, it was my only indicator to see if it reacted to anything. I unplugged it, plugged it back in, and zero, nothing, no light… just gray… no orange, no green.

Ok… now it was dead for good. I called Apple, and they pushed me through the standard routine: check the connector, etc.

 

Went to the Service Center again; the technician happily noted no scratches, nothing, mint condition. Given past experience, I pushed them to sign that no physical damage was present. Since I was in a rush (it was late and I had to prepare for a trip the next day), I couldn't wait or come back when they would be able to open the system next to me.

 

The day after, I received mail from the Service Center: $800 for the motherboard, $1000 for the 512GB SSD, plus some peanuts for fans.

As per the Apple technician, one LSI is on and, to my even bigger surprise, they claim there were traces of some liquid inside!!! I can honestly say that this system didn't have any contact with any liquid. I've been super-duper careful with it, much more so than with any earlier system, due to the earlier experience.

Yet Apple once again claims the system had contact with liquid.

 

I'm sorry Apple, but it is either a scam or something is wrong with the design and/or the LSIs. All I can say is that no liquid was spilled. The conditions in which the system was used were always indoors: office, hotel, living room, lounge at the airport, plane. No bathroom, no kitchen, no exteriors, nothing. Always dry air, no temperature shocks; the system was always in a sleeve and then in a travel rolling bag (very well protected from everything).

Leaving the system at Apple, I didn't think they could say anything about LSIs, especially this time, given how much care I took of the system.

 

What is interesting in the whole story is that, at the same time, the Lenovo system I'm typing on now has been used for much more time than the last MacBook. Earlier I used the good old ThinkPad W530, etc., and never had an issue.

 

What did I like about the MacBook Pro? The hardware design: it is nice, lightweight and small compared to the T530 tank.

As this MacBook failed again, am I going to stay with Apple? Rather not…

I'll try to understand which LSIs were triggered and will see what Apple has to say, but for the sake of time, this time I'll probably exercise the insurance unless Apple turns out to be helpful.

Depending on that, I'll either walk away from Apple or might stay… not really sure.

 

This will have some, potentially minor, impact on Apple sales, but some impact will be there: I'll advise internally within the company to stop purchasing Apple, so we are talking about potentially another 200-300 units which would otherwise have been purchased within the next 3 years or so.

I can't recommend Apple to any business now.

What could change that is Apple admitting to some design issues and taking this more seriously.



OpenVPN sudo and pam failure

The problem comes from a new systemd setting on Ubuntu 17.04+ (experienced on 18.04), which makes sudo/PAM calls from OpenVPN scripts fail.

The fix is to override the limits in the OpenVPN systemd unit. Writing the change creates an override file under /etc/systemd/system/, after which systemd needs to be reloaded.
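Based on the referenced Debian bug, the override is presumably along these lines (a sketch; the unit name is assumed to be openvpn@.service):

# open an editor on an override file for the unit
systemctl edit openvpn@.service

# add the following to lift the process limit that breaks sudo/pam:
[Service]
LimitNPROC=infinity

# systemctl edit writes /etc/systemd/system/openvpn@.service.d/override.conf;
# reload units and restart the service
systemctl daemon-reload
systemctl restart openvpn@<config>.service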

 

The fix has been mentioned in:

  • https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=792653#25
  • https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=792653#45

SATA passthrough on ESXi 6.5 & 6.7

lspci -v|grep -i Mass -A1
0000:00:17.0 SATA controller Mass storage controller: Intel Corporation Sunrise Point-LP AHCI Controller [vmhba0]
Class 0106: 8086:9d03

vi /etc/vmware/passthru.map
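# each line is: vendor-id device-id reset-method fptShareable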

8086 9d03 d3d0 false

reboot

Works on ESXi 6.5, didn’t work on ESXi 6.7 (fresh install in both cases).

Update SQL entries

Working on some other elements, I ran into the need to update a large number of rows within an SQL table.

Below are examples of how it can be done effectively.
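For example (a sketch; table and column names are made up, MySQL syntax):

-- update many rows in a single statement
UPDATE products
   SET price = price * 1.10
 WHERE category = 'books';

-- update one table from the contents of another (join-update)
UPDATE orders o
  JOIN customers c ON c.id = o.customer_id
   SET o.region = c.region;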

 

 

Sensors – adjust high & crit values – ACPI

DISCLAIMER:

The below might be incorrect, as it causes all temperatures (including the current readings) to be calculated with a -10°C offset.

 

With the default lm-sensors configuration I experienced shutdowns from time to time. Looking at the details, it all seemed like lm-sensors had been instructed by the BIOS with far too high "high" and "crit" values.

Before:

acpitz-virtual-0
Adapter: Virtual device
temp1:        +70.5°C  (crit = +126.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Core 0:       +70.0°C  (high = +100.0°C, crit = +100.0°C)
Core 1:       +70.0°C  (high = +100.0°C, crit = +100.0°C)

After:

acpitz-virtual-0
Adapter: Virtual device
temp1:        +70.5°C  (crit = +126.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Core 0:       +60.0°C  (high = +90.0°C, crit = +90.0°C)
Core 1:       +60.0°C  (high = +90.0°C, crit = +90.0°C)

How?

Check which sensors are reported and how; based on that, create a configuration file with the adjustments.
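A plausible configuration, given the -10°C behaviour described in the disclaimer (the file name and the temp feature names are assumptions; verify them against the sensors -u output):

# list sensors with their raw feature names
sensors -u

# /etc/sensors.d/adjust.conf (hypothetical file name)
chip "coretemp-isa-0000"
    # temp2/temp3 are assumed to map to Core 0/Core 1; check sensors -u.
    # compute shifts readings and limits down by 10 degrees; note it also
    # shifts the current temperature, as per the disclaimer above
    compute temp2 @-10, @+10
    compute temp3 @-10, @+10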

For some reason, on my Ubuntu 14.04 system based on a Dell D620, the critical value wasn't updated properly and took the high value instead.

p.s. man sensors.conf suggests using the "set" statement, but it didn't work for me.

Here is an example of what didn't work:
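Presumably something along these lines (reconstructed, not necessarily the original):

chip "coretemp-isa-0000"
    set temp2_max 90
    set temp2_crit 90

applied with sensors -s.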

 

Network folder synced to OneDrive/SharePoint

The SharePoint synchronization mechanism, which uses groove.exe in the background, does a lot to block synchronization to a network folder.

Whilst this might be well explained by asking why one would sync a network location such as SharePoint to another network folder, there are situations where it is required. One such requirement is when the system is running in a VM and the data needs to be synchronized to a VM shared folder, which is visible to the operating system as a network drive and is thus available to different VMs, avoiding the waste of space that would come from putting that folder on the "C:" drive.

The workaround, which works pretty well, is to use a symbolic link. Before doing so, make sure to close any instance of groove.exe or any other software using data in the synchronized folder.

Should you have your folders synchronized already, the steps are:

  1. Stop all groove.exe processes.
  2. Rename the existing folder, in the example below <Company Name Team>. If the OneDrive folder was selected to be c:\OneDrive, it will be c:\OneDrive\<Company Name Team>, and there will be another, personal OneDrive folder next to it.
    If this works fine, it means that all programs were closed properly; otherwise an error would be raised.
  3. Open a CLI with Administrator privileges and create the symbolic link (see the sketch after this list).
  4. Re-run OneDrive and/or Groove to verify that everything has been recognized properly.
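A sketch of the link step (paths are illustrative; Z: stands for the network drive backed by the VM shared folder):

:: run in the elevated prompt; creates a directory symbolic link at the old path
mklink /D "C:\OneDrive\<Company Name Team>" "Z:\<Company Name Team>"

Note it has to be a symbolic link (/D), not a junction (/J), as junctions cannot point at network locations.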

This should be as simple as that.

Important: synchronization of a private folder on SharePoint requires a newer OneDrive version than the one used for shared folders, and that version has an additional check which does not seem to accept the above trick.

Log all request details on Squid

Sometimes it is required to log all request details on Squid, e.g. when you need to figure out the details to write your own URL rewriter, to optimize a video cache, or for other statistics.

By default, Squid strips the details after "?" in its logs, and all we need is to turn that off:

strip_query_terms off

More details at http://www.squid-cache.org/Doc/config/strip_query_terms/

 

Thumbs on tunnels – terminate tunnel and check content – DLP

The past was easy peasy; not the case anymore these days. Back then, HTTPS was rarely used, and only where it really needed to be. Anyone on the wire could see what was going on. And yeah, it meant literally everyone.

Was it good? No, not at all. Everyone could track anyone, collect information, behavior, etc. The result? Everyone started to move towards HTTPS (SSL, then TLS). This has a couple of good and bad outcomes. The Squid cache ratio went down, and I mean very, very low these days. Most connections handled by Squid these days are tunnels (CONNECT), and anything can be sent through such a tunnel: not only private data, credit card numbers, etc., but also ads and viruses (sic!).

This created a specific need to be able to check the content of the stream, especially in enterprise environments. DLP systems can only work on streams if they can check the payload, i.e. the decrypted traffic, which means they need to be the well-known man-in-the-middle: the tunnel end-point from the client side, initiating a new tunnel towards the server. Only this allows the traffic to be inspected.

At the same time, one could ask: hey, how about certificates? The long-winded answer can be found elsewhere, but the short answer is that in an enterprise environment there is at least one Root Certificate Authority (CA), and an Intermediate Certificate Authority can be created. The root or intermediate CA (simply "CA" from here on), together with its key file, is uploaded to the DLP system, allowing it to generate new certificates on the fly for terminated tunnels.

The CA is already trusted in the enterprise environment, as the Root CA certificate is added to the trusted CA ring on the client host as part of the operating system deployment package or the domain connection.

Since the client trusts the Root CA, it automatically trusts certificates signed by that CA – this is how Public Key Infrastructure works. Additionally, the DLP system usually tries to mimic all the original certificate parameters, with only the CA-related details differing. People rarely check certificate details anyway if everything is green and no popup/error is raised.

private DLP

With all the above being said, one can have their own DLP system based on Squid. This "little piece of software" is great at handling huge loads, caching data and calling others for help. This is what we're going to do today.

To terminate SSL connections, Squid uses the ssl_bump functionality. The Ubuntu 16.04 LTS default package is not built with this great functionality, hence we need to start with a little build work.

Let's get the sources and all the needed libraries (in everything below, any sudo call is skipped, as it just makes the output longer and you, dear reader, certainly know how to use sudo).
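On Ubuntu 16.04 that would be along these lines (a sketch; assumes deb-src entries are enabled in sources.list):

apt-get source squid
apt-get build-dep squid
apt-get install devscripts build-essential fakeroot libssl-dev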

Then we need to apply a patch to enable SSL as needed.

To apply the patch, use your standard methodology: patch -p<level> < diff-file.patch
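In essence, the patch adds the SSL options to the configure flags in debian/rules, something like (flag names per the Squid 3.5 documentation):

# debian/rules, appended to the ./configure options:
		--with-openssl \
		--enable-ssl-crtd \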

Afterwards, the rebuilt Squid package and any missing dependencies need to be installed.
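Roughly (a sketch; package file names will vary):

# from inside the unpacked squid source tree
dpkg-buildpackage -rfakeroot -b -uc -us
# then install the freshly built packages and fix up dependencies
dpkg -i ../squid*.deb
apt-get install -f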

Certificate generation

First of all, we need a folder where we will store it all.
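A minimal sketch (directory and file names are assumptions):

mkdir -p /etc/squid/ssl_cert
chown proxy: /etc/squid/ssl_cert
chmod 700 /etc/squid/ssl_cert
cd /etc/squid/ssl_cert
# self-signed CA used for bumping; key and certificate in one PEM file
openssl req -new -newkey rsa:2048 -sha256 -days 1825 -nodes -x509 -keyout myCA.pem -out myCA.pem
# DER copy for importing on client systems
openssl x509 -in myCA.pem -outform DER -out myCA.der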

 

The CA certificate then needs to be imported on the clients. Depending on the client system, the certificate import will look different; a good idea might be to place the certificate on some easily accessible server, e.g. the local wpad host.

Once the certificate is downloaded, it should be installed. On Windows this can be done from an elevated prompt.
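For example, with certutil (a sketch, using the myCA.der generated earlier):

certutil -addstore -f "Root" myCA.der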

Some apps do not respect operating-system-level certificates, but most will. Some apps might need to be restarted; it shouldn't be required to restart the full system, but who knows what type of ancient system one might be using?

All the new certificates generated on the fly need to be stored somewhere, so a certificate database folder needs to be created as well.
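For Squid 3.5 the on-the-fly certificate database is initialized with the ssl_crtd helper (paths may differ per distro):

/usr/lib/squid/ssl_crtd -c -s /var/lib/ssl_db
chown -R proxy: /var/lib/ssl_db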

And update squid.conf (not all the SSL cert ERROR related flags should be as below; tune them to your needs).
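A minimal ssl_bump sketch for Squid 3.5, consistent with the ACL file names listed below (port, paths and flags are assumptions; tune before use):

http_port 3127 ssl-bump cert=/etc/squid/ssl_cert/myCA.pem generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB

acl sslBumpnet src "/etc/squid/ssl_bump/sslBumpnet"
acl sslnoBumpnet src "/etc/squid/ssl_bump/sslnoBumpnet"
acl sslnoBumpnetdst dstdomain "/etc/squid/ssl_bump/sslnoBumpnetdst"
acl sslnoBumpSites ssl::server_name "/etc/squid/ssl_bump/sslnoBumpSites"

acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump splice sslnoBumpnet
ssl_bump splice sslnoBumpnetdst
ssl_bump splice sslnoBumpSites
ssl_bump bump sslBumpnet
ssl_bump splice all

# example of an SSL cert ERROR related flag; accepting all upstream
# certificate errors is dangerous, tune to your needs
sslproxy_cert_error allow all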

Create set of files under /etc/squid/ssl_bump:

  1. sslBumpnet – subnets/hosts which we will bump, can be selective if needed
  2. sslnoBumpnet – these subnets/hosts we won’t bump (see logical construction above and tune it to your needs
  3. sslnoBumpnetdst – these domains/servers we won’t bump
  4. sslnoBumpSites

An example sslnoBumpnetdst follows, as it might be tricky to set up at the beginning. Some apps have built-in certificates and verify the connection against them, e.g. the Google Play store and a couple of others.

It was interesting to find out what banks are doing with data: in at least one case, for statistics purposes, a bank was sending out GET requests with bank account details, including the balance and transaction details, to an online stats agency. This was against any banking standards, and the bank should be prosecuted for it. Spotting this was only possible after terminating the tunnels, as otherwise the GET wasn't visible. And, as always, this kind of interception can only be acceptable for tests and for your own private use at home in an isolated lab.

The below list was fine-tuned based on tests and used to be longer.
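For illustration only (these exact entries are examples of certificate-pinned services, not the original list):

.play.google.com
.googleapis.com
.gstatic.com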

The last step is to restart Squid.
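On a systemd-based system, for example:

systemctl restart squid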

To test it, set port 3127 as the proxy in a client. Once everything is tested, move on to transparent interception (and don't forget to have the CA trusted on the client side).

 

For non-Android-based systems, a proxy.pac/wpad.dat file can be created.

To get that working, the wpad.yourdomain and wpad hostnames need to resolve to your web server, and the wpad.dat/proxy.pac file needs to be in its root folder.
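A minimal sketch (the proxy address is an example):

function FindProxyForURL(url, host) {
    // send everything via the intercepting proxy, with a direct fallback
    return "PROXY 10.0.0.1:3127; DIRECT";
}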

The Android platform is prone to not processing this file (at least as of April 2017), and manual proxy settings in the WiFi section need to be set.

This should all work… but hey, problems are to be expected:

  1. The usual problem is that the CA is not trusted.
  2. The client/app has its own CA list and does not trust the operating-system-level list. This requires adding the CA to the app's own trusted ring.
  3. Perl/Python based apps might have their own local SSL and root cert trusted ring, see: Kodi and ssl_bump Squid.
  4. To troubleshoot, full logging is often useful, see Log all request details on Squid.

 

Links and reads

  1. Diladele non-free:
    https://docs.diladele.com/administrator_guide_4_0/installation_and_removal/install_on_ubuntu.html
    https://www.diladele.com/solution.html
  2. ClamAV & SquidClamAV
    http://terminal28.com/how-to-install-and-configure-squid-proxy-server-clamav-squidclamav-c-icap-server-debian-linux/
    http://squidclamav.darold.net/install.html
  3. http://wiki.squid-cache.org/ConfigExamples/Intercept/SslBumpExplicit
  4. http://wiki.squid-cache.org/Features/SslPeekAndSplice
  5. http://www.squid-cache.org/Doc/config/ssl_bump/
  6. http://wiki.squid-cache.org/Features/DynamicSslCert
  7. http://ubuntuserverguide.com/2013/12/how-to-filter-https-traffic-with-squid-3-on-ubuntu-server-13-10.html
  8. https://forums.kali.org/showthread.php?23036-SSL-Interception-with-Squid3-(MITM)
  9. http://marek.helion.pl/install/squid.html
  10. http://thejimmahknows.com/squid-3-1-caching-proxy-with-ssl/
  11. http://www.squid-cache.org/Doc/config/acl/
  12. https://docs.diladele.com/administrator_guide_4_0/system_configuration/https_filtering/recompile_squid.html
  13. https://smoothnet.org/squid-v3-5-proxy-with-ssl-bump/