Thoughts and Practice on a Passive Scanner

Preface:

It started when I saw 猪猪侠's slides and thought they were awesome. (Actually I'd had this idea a year ago, but never got around to doing anything about it.)

Process

Proxy interception

Since this is a passive scanner, it needs a proxy. I originally had two ideas:

1. Use Burp and write a plugin for it
2. Do it directly in Python

I ended up going with Python. Partly because I was too lazy to go learn Jython for Burp extensions, and partly because it's simply more convenient.
As for the proxy, the only library I knew of was mitmproxy, so that's what I used. I have to say, Python's third-party ecosystem really is powerful.

First, the main class, which loads the configuration options and starts the proxy. The code lives in run.py:

# run.py -- these imports match the older mitmproxy releases this post targets
from mitmproxy import options
from mitmproxy.master import Master
from mitmproxy.proxy import config
from mitmproxy.proxy.server import ProxyServer

class ProxyStart():
    def __init__(self, option, obj):
        self.opts = options.Options()
        for i in option:
            self.__addOption(i)
        self.cf = config.ProxyConfig(self.opts)
        self.server = ProxyServer(config=self.cf)
        self.master = Master(opts=self.opts)
        self.master.server = self.server
        self.master.addons.add(obj)

    def __addOption(self, *args):
        # args[0] is a (name, type, default, help) tuple
        self.opts.add_option(args[0][0], args[0][1], args[0][2], args[0][3])

    def run(self):
        self.master.run()

Next, the launcher. Calling the class directly didn't look very clean, so I wrapped it in a function:

def server_run():
    conf = [('listen_host', str, '127.0.0.1', 'this is host'),
            ('listen_proxy', int, 8080, 'this is proxy'),
            ('mode', str, 'regular', 'this is mode'),
            ("body_size_limit", int, 100000, "this is response size")]
    start = ProxyStart(conf, filterRq())
    start.run()

And our filter (after all, we can't intercept everything):

class getHttp():
    def __init__(self, f):
        self.flow = f
        self.header = dict()
        # flow.response is None during the request hook, so bail out early
        if self.flow.response is None:
            return
        content_type = str(self.flow.response.headers.get('Content-Type', '')).split(';')[0]
        # config.ContentType is a blacklist of content types we don't scan
        if content_type not in config.ContentType:
            self.header['method'] = self.__getMethod()
            self.header['url'] = self.__getUrl()
            self.header['Referer'] = self.__getReferer()
            self.header['cookie'] = self.__getCookie()
            self.header['Accept'] = self.__getAccept()
            self.header['data'] = self.__getData()
            self.header['Content-Type'] = self.__getContentType()
            res = SqlMapApi(config.sqlmapapi_url, self.header['url'],
                            self.header['cookie'], self.header['Referer'],
                            self.header['data'])
            taskid = res.getTaskId()
            if str(self.header['method']).upper() == 'GET':
                res.startScan_G(taskid)
            else:
                res.startScan_P(taskid)
            print(self.header)

    def __getMethod(self):
        return self.flow.request.method

    def __getUrl(self):
        return self.flow.request.url

    def __getReferer(self):
        return self.flow.request.headers.get('Referer', '')

    def __getCookie(self):
        # check membership in headers, not in the header value itself
        return self.flow.request.headers.get('Cookie', '')

    def __getData(self):
        if str(self.flow.request.method).upper() != 'GET':
            return bytes(self.flow.request.content).decode('utf-8')
        return ''

    def __getAccept(self):
        return self.flow.request.headers.get('Accept', '')

    def __getContentType(self):
        return self.flow.response.headers.get('Content-Type', '')
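The Content-Type check in __init__ can be sketched as a standalone predicate. Here STATIC_TYPES is a hypothetical stand-in for the config.ContentType blacklist (the post never shows its contents):

```python
# Hypothetical blacklist standing in for config.ContentType
STATIC_TYPES = {'image/png', 'image/jpeg', 'image/gif', 'text/css',
                'application/javascript', 'font/woff2'}

def should_scan(content_type_header):
    """Decide whether a response is worth forwarding to sqlmap,
    based only on its Content-Type header."""
    # 'text/html; charset=utf-8' -> 'text/html'
    mime = content_type_header.split(';')[0].strip().lower()
    return mime not in STATIC_TYPES
```

Keeping the decision in one small function makes the blacklist easy to tune without touching the flow-handling code.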

Then call it from the addon class:

class filterRq():
    # mitmproxy invokes these hooks for every request and response
    def request(self, f):
        getHttp(f)

    def response(self, f):
        getHttp(f)

Scanning

For scanning, I currently just hand everything off to sqlmap (PS: sqlmapapi makes it easy to distribute the work).

import json
import requests

class SqlMapApi:
    def __init__(self, sqlurl, url, cookie, referer, data=''):
        self.sqlurl = str(sqlurl)
        self.url = str(url)
        self.cookie = str(cookie)
        self.referer = str(referer)
        self.data = str(data)

    def getTaskId(self):
        taskid = requests.get(url='http://' + self.sqlurl + '/task/new')
        taskid = json.loads(taskid.text)
        return str(taskid['taskid'])

    def startScan_P(self, taskid):
        # POST scan: include the request body; sqlmap's option name is "referer"
        start = requests.post(url='http://' + self.sqlurl + '/scan/' + taskid + '/start',
                              data=json.dumps({"url": self.url, "data": self.data,
                                               "referer": self.referer, "cookie": self.cookie}),
                              headers={"Content-Type": "application/json"})
        return bool(json.loads(start.text)["success"])

    def startScan_G(self, taskid):
        start = requests.post(url='http://' + self.sqlurl + '/scan/' + taskid + '/start',
                              data=json.dumps({"url": self.url, "referer": self.referer,
                                               "cookie": self.cookie}),
                              timeout=5,
                              headers={"Content-Type": "application/json"})
        return bool(json.loads(start.text)["success"])

    def getStatus(self, taskid):
        # the status endpoint is /scan/<taskid>/status
        r = requests.get(url='http://' + self.sqlurl + '/scan/' + taskid + '/status')
        if json.loads(r.text)['success'] == True:
            if json.loads(r.text)['status'] != 'running':
                return True
            return 'running'
        return False

    def getData(self, taskid):
        data = json.loads(requests.get(url='http://' + self.sqlurl + '/scan/' + taskid + '/data').text)['data']
        if data is None:
            return False
        return data
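Since startScan_G and startScan_P differ only in whether a body is sent, the JSON payload for /scan/&lt;taskid&gt;/start could be built in one place. This is a sketch; build_start_payload is my own name, and the keys follow sqlmap's option names:

```python
import json

def build_start_payload(url, method='GET', data='', referer='', cookie=''):
    """Build the JSON body for sqlmapapi's /scan/<taskid>/start call.

    The 'data' field is only included for non-GET requests, mirroring
    the startScan_G / startScan_P split above.
    """
    payload = {'url': url, 'referer': referer, 'cookie': cookie}
    if method.upper() != 'GET' and data:
        payload['data'] = data
    return json.dumps(payload)
```

With a helper like this, adding new sqlmap options (e.g. custom headers) means touching one function instead of two near-duplicate methods.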

Displaying results

Hmm, I haven't written this part; for now you have to look the results up in sqlmapapi itself. Feel free to build on this and extend it.
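If you do want to surface results, the existing getStatus/getData methods already give you what a polling loop needs. A minimal sketch, where the two callables are placeholders for bound SqlMapApi methods:

```python
import time

def wait_for_results(get_status, get_data, interval=5, max_polls=60):
    """Poll until the sqlmapapi task leaves the 'running' state,
    then return whatever findings get_data reports (None on timeout)."""
    for _ in range(max_polls):
        if get_status() != 'running':
            return get_data()
        time.sleep(interval)
    return None
```

Usage would look like `wait_for_results(lambda: res.getStatus(taskid), lambda: res.getData(taskid))`, run in a background thread so the proxy keeps serving traffic.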

GitHub

https://github.com/XiaoTouMingyo/ProxySqlMap
Blog: www.tysec.org

Permalink:

http://f4ckweb.top/index.php/archives/28/