Writing a simple CBT (changed block tracking) driver for Linux

Here is a simple Linux driver that can track all IO requests going to a particular block device. This is useful as a starting point when you want to write a simple CBT (changed block tracking) driver for Linux.

The key here is to get the block device's request queue, then replace its request function with your own:

q = bdev_get_queue(bdev);
orig = q->make_request_fn;
q->make_request_fn = requestfunc;

In your own request function you can print all of the struct bio information using a simple helper like this:

static void printbio(struct bio *bio, char *pref)
{
	printk(KERN_INFO "%s bio=%p, dev=%x, sector=%lu, bi_flags=%lx"
	       " bi_rw=%lx bi_size=%d bi_vcnt=%d bi_io_vec=%p"
	       " bi_max_vecs=%d\n",
	       pref, bio, bio->bi_bdev ? bio->bi_bdev->bd_dev : -1,
	       (unsigned long)bio->bi_iter.bi_sector, bio->bi_flags, bio->bi_rw,
	       bio->bi_iter.bi_size, bio->bi_vcnt, bio->bi_io_vec,
	       bio->bi_max_vecs);
}
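Putting it together, a minimal module sketch might look like this. This is a sketch only, written against kernels of roughly the 3.14-4.3 era, where make_request_fn still has this void signature and struct bio still has bi_bdev/bi_rw; /dev/sdb is a placeholder target device:

```c
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/blkdev.h>
#include <linux/bio.h>

static struct block_device *bdev;
static make_request_fn *orig_request_fn;

/* Our hook: log write bios, then hand everything to the original handler. */
static void cbt_make_request(struct request_queue *q, struct bio *bio)
{
	if (bio_data_dir(bio) == WRITE)
		printk(KERN_INFO "cbt: write at sector %lu, %u bytes\n",
		       (unsigned long)bio->bi_iter.bi_sector,
		       bio->bi_iter.bi_size);
	orig_request_fn(q, bio);
}

static int __init cbt_init(void)
{
	struct request_queue *q;

	bdev = blkdev_get_by_path("/dev/sdb", FMODE_READ, NULL);
	if (IS_ERR(bdev))
		return PTR_ERR(bdev);

	q = bdev_get_queue(bdev);
	orig_request_fn = q->make_request_fn;
	q->make_request_fn = cbt_make_request;
	return 0;
}

static void __exit cbt_exit(void)
{
	struct request_queue *q = bdev_get_queue(bdev);

	/* Restore the original request function before unloading. */
	q->make_request_fn = orig_request_fn;
	blkdev_put(bdev, FMODE_READ);
}

module_init(cbt_init);
module_exit(cbt_exit);
MODULE_LICENSE("GPL");
```

Note that the make_request_fn signature has changed across kernel versions (int, void, and later blk_qc_t return types), so check your kernel's blkdev.h before building.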


Using this technique you can also build a simple copy-on-write snapshot feature for block devices that are not part of LVM.
Intercept the IO request, save that BIO in memory (postponing it), then create and submit a new BIO that reads the original data from disk, and only when that read completes let the first BIO go through. That is all a simple COW snapshot takes.
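Inside the hooked request function, that COW step might be sketched like this. Everything here is hypothetical: cow_read_done() is a completion callback you would write yourself that copies the page into the snapshot store and then passes the postponed write (stashed in bi_private) to the saved original make_request_fn pointer; single-page bios and error handling are glossed over, and the field names match the same 3.14-era kernels as above:

```c
/* Hypothetical completion callback: saves the page to the snapshot store,
 * then resubmits rd->bi_private (the postponed write) via the original
 * make_request function. */
static void cow_read_done(struct bio *rd, int err);

static void cow_make_request(struct request_queue *q, struct bio *bio)
{
	if (bio_data_dir(bio) == WRITE) {
		struct page *pg = alloc_page(GFP_NOIO);
		struct bio *rd = bio_alloc(GFP_NOIO, 1);

		/* Read the original data from the sectors about to be
		 * overwritten into a fresh page. */
		rd->bi_bdev = bio->bi_bdev;
		rd->bi_iter.bi_sector = bio->bi_iter.bi_sector;
		bio_add_page(rd, pg, PAGE_SIZE, 0);
		rd->bi_end_io = cow_read_done;
		rd->bi_private = bio;	/* postpone the original write */
		submit_bio(READ, rd);
		return;			/* the write continues from cow_read_done() */
	}
	orig_request_fn(q, bio);
}
```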

How to export all emails from Salesforce

Salesforce might be a great CRM, but its support ticket system is not very useful, to say the least, especially when you need to quickly understand what has been going on in a support case and check which troubleshooting steps have already been taken.

Here is a simple Python script that fetches all emails for a case number and saves them in a local folder. All emails can be combined into one file, or they can be saved as separate files.
Files and emails are organized and numbered, so you can quickly read through the whole email correspondence.

# coding: utf-8

"""
    Purpose: fetches all emails for a case number specified at the cmd prompt. All emails can be combined into one file,
    or they can be saved as separate files.

    Notes on usage: make sure to change the `login` and `token` variables down below. Switching `single_file_mode` to
    False will make the script produce output in its original way: 1 file per email.
"""

import os
import getpass
import beatbox
from bs4 import BeautifulSoup

__author__ = 'Rustam Kovhaev'

# change it to your credentials, please note that you should append password with your SF security token
login = "r*****@veeam.com"
token = "your token"

single_file_mode = True

_start_comment_str = '<!--'
_end_comment_str = '-->'
_boundary_strs = [
    'From: Veeam Support',
    'From: "Veeam Support',
    '----- Original Message -----',
    '-----Original Message-----',
]
# email template: {0}=header, {1}=body, {2}=element id, {3}=quoted emails
_email_tpl = u"""
    <div style="background:silver;font-weight:bold;padding:0.5em 0em 0.5em 1em">{0}</div>
    <pre>{1}</pre>
    <div style="background:red;font-weight:bold;padding:0.5em 0em 0.5em 1em" onclick="var e=document.getElementById('{2}');if(e.style.display=='block'){{e.style.display='none'}}else{{e.style.display='block'}}">Show/hide quoted emails</div>
    <div id="{2}" style="display: none">
        <pre>{3}</pre>
    </div>
"""

def main():
    # get emails from SF
    case_number = raw_input('\nPlease input case number: ')
    password = getpass.getpass('Password: ')
    password = password + token
    svc = beatbox.PythonClient()
    svc.login(login, password)

    qr = svc.query("SELECT Id FROM Case WHERE CaseNumber='" + case_number + "'")
    records = qr['records']
    print "Case ID: " + records[0].Id + "\n\n"

    qr = svc.query("SELECT Id,MessageDate,FromAddress,Subject,HtmlBody,TextBody FROM EmailMessage WHERE ParentId='" + records[0].Id + "'")
    records = qr['records']
    total_records = qr['size']
    query_locator = qr['queryLocator']

    while not qr['done'] and len(records) < total_records:
        qr = svc.queryMore(query_locator)
        query_locator = qr['queryLocator']
        records += qr['records']

    records.sort(key=lambda x: x.MessageDate, reverse=False)

    # save emails into 1 or many files
    path = case_number.strip('\n')
    output_dir = os.path.join(os.getcwd(), path)
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)

    if single_file_mode:
        file_path = os.path.join(output_dir, path + '.html')
        with open(file_path, 'w') as f:
            for index, email in enumerate(records):
                header = ' - '.join((str(index).zfill(3), email.MessageDate.strftime('%d.%m'), email.FromAddress))
                if not email.HtmlBody:
                    new_body = process_text_body(email.TextBody.decode('utf-8', 'ignore'), index, header)
                else:
                    new_body = process_html_body(email.HtmlBody.decode('utf-8', 'ignore'), index, header)
                f.write(new_body.encode('utf-8'))
    else:
        for index, email in enumerate(records):
            file_name = '-'.join((str(index).zfill(3), email.MessageDate.strftime('%d.%m'), email.FromAddress)) + '.html'
            file_path = os.path.join(output_dir, file_name)
            with open(file_path, 'w') as f:
                if not email.HtmlBody:
                    nstr = email.TextBody.replace('\n', '\n <br />')
                    f.write(nstr.encode('utf-8'))
                else:
                    f.write(email.HtmlBody.encode('utf-8'))

def process_text_body(body, index, header):
    # to squeeze more data into 1 page, hide quoted emails
    boundaries = []
    for boundary_str in _boundary_strs:
        boundary = body.find(boundary_str)
        if boundary > -1:
            boundaries.append(boundary)
    if boundaries:
        quoted_emails_start = min(boundaries)
        quoted_emails = body[quoted_emails_start:]
        email = body[:quoted_emails_start]
    else:
        quoted_emails = ''
        email = body

    # format email template
    return _email_tpl.format(header, email, u'subemail' + str(index), quoted_emails)

def process_html_body(body, index, header):
    # remove html garbage
    text = BeautifulSoup(body, 'lxml').get_text()

    # remove leading comment if any
    start_comment_pos = text.find(_start_comment_str)
    end_comment_pos = text.find(_end_comment_str, start_comment_pos)
    if end_comment_pos != -1:
        text = text[end_comment_pos + len(_end_comment_str):]

    # process as text
    return process_text_body(text, index, header)

if __name__ == '__main__':
    main()

New version (v3) of the ppp patch implementing PEAP-MS-CHAP v2

Yesterday our admins updated our RRAS server, and it looks like the PEAP protocol changed slightly.

Here is the new patch for PPP 2.4.7 that allows you to connect to MS RRAS via a PEAP VPN.

First of all, some changes need to be made on the RRAS server: configure the EAP payload size and set the MTU to 1344. EAP doesn't support fragmentation per the RFC, but Microsoft implemented EAP fragmentation anyway; the PPP daemon doesn't support it and will discard packets that are larger than the MRU/MTU negotiated during LCP.
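On the client side you can pin the same limit in the peer file. A sketch of the relevant pppd options (the file name, hostname, and login are placeholders, and your tunnel/pty line will differ depending on how you reach the RRAS server):

```
# /etc/ppp/peers/rras  (sketch only; replace names with your own)
name login@domain.com
remotename login@domain.com
mtu 1344
mru 1344
noauth
```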


Patch itself: https://drive.google.com/file/d/0B3W_Sd07L80mN1BVWlFtejJGaGs/view?usp=sharing
You will also need the gnutls-dev package.

The patch modifies the default install location from /usr/local to /usr, so make sure you run the proper pppd.

Here is part of the peer config:
remotename login@domain.com 

and the chap-secrets file:
# client            server  secret    IP addresses
login@domain.com    *       password  *

If you specify the login in a different manner (without the @) you'll get a segmentation fault.

ppp patch implementing PEAP-MS-CHAP v2

Here is the patch that adds PEAP (type 25) support to the Linux ppp daemon.
Tested with a 2008 R2 Microsoft RAS server.
Applies to ppp 2.4.5.
Thanks to the wpa_supplicant (TLS implementation) and Wireshark (TLS dissector) creators; some parts of the code were taken from there.

patch itself:

The patch modifies the default install location from /usr/local to /usr, so make sure you run the proper pppd.

Here is part of the peer config:
name login@domain.com
remotename login@domain.com

and the chap-secrets file:
# client        server  secret                  IP addresses
login@domain.com * password *

If you specify the login in a different manner (without the @) you'll get a segmentation fault. I didn't have much time to implement proper error checking, so it's like a band-aid =).