
CTF Scripting Basics

Python Scripting Basics

Using the requests Library

import requests

url = "http://www.baidu.com"

req = requests.get(url)
req.encoding = 'utf-8'  # set the response encoding
print(req.text)     # body as a string
print(req.content)  # body as bytes
# convert the bytes to a string
print(req.content.decode('utf-8'))
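The str/bytes distinction above can be seen without any network call; a minimal sketch using a plain bytes object:

```python
# What req.content holds is bytes; what req.text (or .decode) gives is str.
raw = "hello, 世界".encode('utf-8')  # bytes, like req.content
text = raw.decode('utf-8')           # str, like req.content.decode('utf-8')
print(type(raw).__name__)   # bytes
print(type(text).__name__)  # str
print(text)                 # hello, 世界
```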

Overview of the requests Module

Both the request and the response are objects.

Requests: the get and post methods issue HTTP requests.

User-Agent identifies the browser; a site checks its value to decide whether the request came from a browser. When learning Python web scraping earlier, we had to spoof a browser in order to scrape content.

headers = {"User-Agent": "xxx"}
requests.get(url, headers=headers)

Responses: access the response content through attributes.

Response status code
req.status_code

Request headers (as sent, via the response object)
req.request.headers

Response headers
req.headers

Request cookies (via the response object)
req.request._cookies

Response cookies
req.cookies

Proxying Python through Burp

Setting a proxy in Python:

import requests

url = "http://www.baidu.com"
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "https://127.0.0.1:8080"
}
req = requests.get(url, proxies=proxies)

HTTP GET Requests with Parameters

1. Set the parameters directly in the URL

import requests

url = "http://www.baidu.com?wd=helloworld"
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "https://127.0.0.1:8080"
}
req = requests.get(url, proxies=proxies)

2. Submit the parameters via a params dict

import requests

params = {"wd": "helloworld"}
url = "http://www.baidu.com"
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "https://127.0.0.1:8080"
}
req = requests.get(url, proxies=proxies, params=params)

HTTP POST Requests with Parameters

import requests

data = {"username": "admin", "password": "123456", "Login": "Login"}
url = ".../login.php"
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "https://127.0.0.1:8080"
}
req = requests.post(url, proxies=proxies, data=data)

Comparing the content of a Python POST submission with a POST made from the page:

The Python submission is missing the cookie information.
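One way to see this difference is to prepare (without sending) the same POST and inspect the headers requests would actually transmit; a minimal sketch, using a placeholder URL:

```python
import requests

# Prepare the POST without sending it (URL is a placeholder).
data = {"username": "admin", "password": "123456", "Login": "Login"}
prepared = requests.Request("POST", "http://example.com/login.php", data=data).prepare()

# requests sets Content-Type/Content-Length for the form body, but there
# is no Cookie header unless we supply one ourselves.
print("Cookie" in prepared.headers)  # False
```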

How Cookies Work and How to Use Them

Cookies store user state information.

They are carried in the Cookie header of the HTTP request.

import requests

data = {"username": "admin", "password": "123456", "Login": "Login"}
url = ".../login.php"
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "https://127.0.0.1:8080"
}
headers = {"Cookie": "xxx"}
req = requests.post(url, proxies=proxies, data=data, headers=headers)

To make cookies easier to work with in Python, the requests module provides session(), which keeps state across requests.

import requests

url = ".../login.php"
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "https://127.0.0.1:8080"
}
s = requests.session()
req1 = s.get(url, proxies=proxies)  # the session stores the cookie
print(req1.headers)
data = {"username": "admin", "password": "123456", "Login": "Login"}
req2 = s.post(url, proxies=proxies, data=data)

The default requests timeout is fairly long; the timeout parameter shortens it.

import requests

url = ".../login.php"
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "https://127.0.0.1:8080"
}
s = requests.session()
req1 = s.get(url, proxies=proxies)  # the session stores the cookie
print(req1.headers)
data = {"username": "admin", "password": "123456", "Login": "Login"}
req2 = s.post(url, proxies=proxies, data=data, timeout=3)

Automated SQL Injection Detection with Python

Idea: send input that produces a malformed SQL statement; if the response shows a database error, a SQL injection vulnerability likely exists.

Keyword: SQL syntax

import requests

url = ""
params = {"id": "'"}
req = requests.get(url, params=params)
if req.text.find("SQL syntax") != -1:
    print("find sql inject")
else:
    print('no')
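The same find()-style check can be extended to error signatures from several databases; a minimal sketch (the function name and signature list are my own, not from the original):

```python
# Common DB error fragments (assumed representative, not exhaustive).
ERROR_SIGNATURES = [
    "SQL syntax",                  # MySQL
    "unterminated quoted string",  # PostgreSQL
    "ORA-01756",                   # Oracle
    "Unclosed quotation mark",     # SQL Server
]

def looks_injectable(body):
    # True if any known error fragment appears in the response body
    return any(sig in body for sig in ERROR_SIGNATURES)

print(looks_injectable("You have an error in your SQL syntax near ''"))  # True
print(looks_injectable("<html>all good</html>"))                         # False
```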

Automated XSS Detection with Python

Detection approach:
Send an XSS payload; if the response still contains the payload, XSS exists.

Python XSS detection code:

import requests

url = ""
payload = "<script>alert('xss')</script>"

params = {"a": payload}
req = requests.get(url, params=params)
if req.text.find(payload) != -1:
    print("xss found")
else:
    print("no xss")

https://github.com/payloadbox/xss-payload-list

Download the XSS payload list, save it to a new file, and substitute each payload in turn:

import requests
import sys

url = ""

with open('xss_payload.txt', 'r', encoding='utf-8') as f:
    payload_list = f.readlines()
for payload in payload_list:
    payload = payload.strip()  # remove surrounding whitespace/newline
    params = {"a": payload}
    req = requests.get(url, params=params)
    if req.text.find(payload) != -1:
        print("xss found")
        sys.exit()  # stop once found
    else:
        print("no xss")
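Note that this check only fires on a raw reflection: if the site HTML-escapes the payload, the match fails, which is usually the safe (non-exploitable) case. A minimal sketch of that distinction using the stdlib:

```python
import html

payload = "<script>alert('xss')</script>"

raw_page = "<div>" + payload + "</div>"                # reflected verbatim
safe_page = "<div>" + html.escape(payload) + "</div>"  # reflected escaped

print(payload in raw_page)   # True  -> likely XSS
print(payload in safe_page)  # False -> payload was neutralized
```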

Automated Source Code Leak Discovery with Python

Common source code leaks

Directories or files such as .git, .svn, .DS_Store, and backup.zip
import requests

payloads = ['.git', '.svn', '.DS_Store', 'backup.zip']
url = ""
for payload in payloads:
    req = requests.get(url + "/" + payload)
    if req.status_code == 200:
        print("yes:" + req.request.url)
        break
    else:
        continue

Iterating over a txt wordlist of URLs

import requests

payloads = ['.git', '.svn', '.DS_Store', 'backup.zip']
with open("urls.txt", "r") as f:
    url_list = f.readlines()
for url in url_list:
    url = url.strip()  # drop the trailing newline from readlines()
    for payload in payloads:
        req = requests.get(url + "/" + payload)
        if req.status_code == 200:
            print("yes:" + req.request.url)
            break
        else:
            continue
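Lines read from urls.txt may or may not end with a slash, so normalizing before appending the payload avoids double slashes. A small helper sketch (probe_url is my own name, not from the original):

```python
def probe_url(base, payload):
    # strip any trailing "/" so base + "/" + payload never doubles the slash
    return base.rstrip("/") + "/" + payload

print(probe_url("http://example.com", ".git"))   # http://example.com/.git
print(probe_url("http://example.com/", ".git"))  # http://example.com/.git
```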
