Pull Data From Web Link To Dataframe
I have a weblink:
url = 'https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?symbolCode=1270&symbol=RELCAPITAL&symbol=RELCAPITAL&instrument=-&date=-&segmentLink=17&symbolCount=2&segmentLink=17'
How can I pull the option-chain table from this page into a pandas DataFrame?
Solution 1:
You can use urllib.request and send a browser-like User-Agent header (so the request looks like it comes from a regular browser):
import urllib.request
import pandas as pd

# Pretend to be a regular browser so the request is not rejected
user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'
url = "https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?symbolCode=1270&symbol=RELCAPITAL&symbol=RELCAPITAL&instrument=-&date=-&segmentLink=17&symbolCount=2&segmentLink=17"
headers = {'User-Agent': user_agent}
request = urllib.request.Request(url, None, headers)
response = urllib.request.urlopen(request)
data = response.read()

# The option chain is the second table on the page
df = pd.read_html(data)[1]
print(df.head())
The print shows the option chain as a 5-row x 23-column DataFrame with a two-level header: the CALLS columns (Chart, OI, Chng in OI, Volume, IV, LTP, Net Chng, BidQty, BidPrice, AskPrice, AskQty), the Strike Price, and the mirrored PUTS columns (BidQty, BidPrice, AskPrice, AskQty, Net Chng, LTP, IV, Volume, Chng in OI, OI, Chart); cells with no quote appear as '-'.
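Because read_html keeps the CALLS/PUTS band and the individual column labels as a MultiIndex on df.columns, it can help to flatten the header before working with the frame. A minimal sketch, assuming the two-level header shown above:
# Join the header levels into single flat column names, e.g. 'CALLS OI', 'PUTS LTP'
df.columns = [' '.join(map(str, col)).strip() if isinstance(col, tuple) else col
              for col in df.columns]
print(df.columns.tolist())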
Solution 2:
Using requests, you would do:
import pandas as pd
from requests import Session
s = Session()
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '\
'AppleWebKit/537.36 (KHTML, like Gecko) '\
'Chrome/75.0.3770.80 Safari/537.36'}
# Add headers
s.headers.update(headers)
URL = 'https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp'
params = {'symbolCode': 940,
          'symbol': 'DHFL',
          'instrument': 'OPTSTK',
          'date': '-',
          'segmentLink': 17
          }
res = s.get(URL, params=params)
df = pd.read_html(res.content)[1]
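As a quick check, and because read_html leaves the '-' placeholders in place as text, a small cleanup sketch (assuming the request above succeeded and the option chain is again the second table on the page) is:
import numpy as np

df = df.replace('-', np.nan)   # '-' marks strikes with no quote on that side
print(df.shape)
print(df.head())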
Solution 3:
import pandas as pd
from requests import Session
#############################################
pd.set_option('display.max_rows', 500000)
pd.set_option('display.max_columns', 100)
pd.set_option('display.width', 50000)
#############################################
s = Session()
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) '\
'AppleWebKit/537.36 (KHTML, like Gecko) '\
'Chrome/75.0.3770.80 Safari/537.36'}
# Add headers
s.headers.update(headers)
URL = 'https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp'
params = {'symbolCode':940,'symbol':'DHFL','instrument': 'OPTSTK','date': '-','segmentLink': 17}
res = s.get(URL, params=params)
df = pd.read_html(res.content)[1]
df.columns = df.columns.droplevel(-1)             # drop the lowest header level
df = df.iloc[2:len(df)-1].reset_index(drop=True)  # drop the first two rows and the last row, keeping only data rows
df.columns = ['C_Chart','C_OI','C_Chng_in_OI','C_Volume','C_IV','C_LTP','C_Net_Chng','C_BidQty','C_BidPrice','C_AskPrice','C_AskQty','Strike_Price','P_BidQty','P_BidPrice','P_AskPrice','P_AskQty','P_Net_Chng','P_LTP','P_IV','P_Volume','P_Chng_in_OI','P_OI','P_Chart']
df = df[['C_LTP','C_BidQty','C_BidPrice','Strike_Price']]   # keep a few CALLS columns plus the strike
print(df)
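The selected columns still hold text, so before any calculation you may want to coerce them to numbers. A minimal sketch, assuming the column names assigned above:
# Convert the kept columns to numeric; '-' and blanks become NaN
for col in ['C_LTP', 'C_BidQty', 'C_BidPrice', 'Strike_Price']:
    df[col] = pd.to_numeric(df[col], errors='coerce')
print(df.dtypes)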
Solution 4:
Here is one way to pull the table data.
import requests
import pandas as pd
url = 'https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?symbolCode=1270&symbol=RELCAPITAL&symbol=RELCAPITAL&instrument=-&date=-&segmentLink=17&symbolCount=2&segmentLink=17'
r = requests.get(url)
data = pd.read_html(r.content, header=0)   # read_html returns a list of DataFrames, one per <table>
df = data[1]                               # the option chain is the second table
print(df)
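Note that this request sends no User-Agent header. If the site rejects the plain request, you can pass the same browser-style headers used in the solutions above, for example:
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.80 Safari/537.36'}
r = requests.get(url, headers=headers)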
Another way is with BeautifulSoup, parsing the option-chain table row by row:
import requests
import pandas as pd
from bs4 import BeautifulSoup
url = 'https://www.nseindia.com/live_market/dynaContent/live_watch/option_chain/optionKeys.jsp?symbolCode=1270&symbol=RELCAPITAL&symbol=RELCAPITAL&instrument=-&date=-&segmentLink=17&symbolCount=2&segmentLink=17'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'lxml')

data = []
table = soup.find('table', attrs=dict(id="octable"))   # the option-chain table
rows = table.find_all('tr')
for row in rows:
    cols = row.find_all('td')
    if cols:
        # Data row: collect the cell texts
        cols = [ele.text.strip() for ele in cols]
        data.append([ele for ele in cols])  # add "if ele" here to get rid of empty values
    else:
        # Header row: expand cells that span several columns so rows stay aligned
        cols = row.find_all('th')
        cols_2 = []
        for ele in cols:
            e = ele.text.strip()
            cols_2.append(e)
            colspan = int(ele.attrs.get('colspan', 0))
            if colspan:
                for i in range(1, colspan):
                    cols_2.append('')
        data.append(cols_2)

print(data)
df = pd.DataFrame(data)
print(df)
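The resulting frame has numbered columns, and the header rows scraped from the th cells sit at the top as ordinary rows. A possible follow-up, a sketch that assumes (as on the live page at the time) that the second scraped row holds the individual column labels and that the data rows have the same number of cells:
header = data[1]                              # per-column labels (Chart, OI, ..., Strike Price, ...)
df = pd.DataFrame(data[2:], columns=header)   # remaining rows are the actual option data
print(df.head())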