Merge remote-tracking branch 'upstream/master'

Daniel Raper
2024-06-02 22:24:30 +01:00
96 changed files with 1669 additions and 260 deletions


@@ -17,7 +17,7 @@ repos:
hooks:
- id: codespell
args:
- --ignore-words-list=hass,alot,datas,dof,dur,farenheit,hist,iff,ines,ist,lightsensor,mut,nd,pres,referer,ser,serie,te,technik,ue,uint,visability,wan,wanna,withing,Adresse,termine,adresse,oder,alle,assistent,hart,marz,worthing,linz,celle,vor
- --ignore-words-list=hass,alot,datas,dof,dur,farenheit,hist,iff,ines,ist,lightsensor,mut,nd,pres,referer,ser,serie,te,technik,ue,uint,visability,wan,wanna,withing,Adresse,termine,adresse,oder,alle,assistent,hart,marz,worthing,linz,celle,vor,leibnitz
- --skip="./.*,*.csv,*.json"
- --quiet-level=2
exclude_types: [csv, json]

.vscode/launch.json (new vendored file, 57 lines)

@@ -0,0 +1,57 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Test All Sources",
"type": "debugpy",
"request": "launch",
"program": "${workspaceFolder}/custom_components/waste_collection_schedule/waste_collection_schedule/test/test_sources.py",
"console": "integratedTerminal",
"args": [
"-t"
]
},
{
"name": "Test Current Source (.py)",
"type": "debugpy",
"request": "launch",
"program": "${workspaceFolder}/custom_components/waste_collection_schedule/waste_collection_schedule/test/test_sources.py",
"console": "integratedTerminal",
"args": [
"-s",
"${fileBasenameNoExtension}",
"-l",
"-i",
"-t"
]
},
{
"name": "Test All ICS Sources",
"type": "debugpy",
"request": "launch",
"program": "${workspaceFolder}/custom_components/waste_collection_schedule/waste_collection_schedule/test/test_sources.py",
"console": "integratedTerminal",
"args": [
"-I",
"-t"
]
},
{
"name": "Test Current ICS Source (.yaml)",
"type": "debugpy",
"request": "launch",
"program": "${workspaceFolder}/custom_components/waste_collection_schedule/waste_collection_schedule/test/test_sources.py",
"console": "integratedTerminal",
"args": [
"-y",
"${fileBasenameNoExtension}",
"-l",
"-i",
"-t"
]
}
]
}


@@ -260,6 +260,7 @@ Waste collection schedules in the following formats and countries are supported.
- [Lackendorf](/doc/source/citiesapps_com.md) / lackendorf.at
- [Langau](/doc/source/citiesapps_com.md) / langau.at
- [Langenrohr](/doc/source/citiesapps_com.md) / langenrohr.gv.at
- [Leibnitz](/doc/source/citiesapps_com.md) / leibnitz.at
- [Leithaprodersdorf](/doc/source/citiesapps_com.md) / leithaprodersdorf.at
- [Lendorf](/doc/ics/muellapp_com.md) / muellapp.com
- [Leoben](/doc/ics/muellapp_com.md) / muellapp.com
@@ -510,6 +511,12 @@ Waste collection schedules in the following formats and countries are supported.
- [RenoWeb](/doc/source/renoweb_dk.md) / renoweb.dk
</details>
<details>
<summary>Finland</summary>
- [Kiertokapula Finland](/doc/source/kiertokapula_fi.md) / kiertokapula.fi
</details>
<details>
<summary>France</summary>
@@ -708,6 +715,7 @@ Waste collection schedules in the following formats and countries are supported.
- [Heidelberg](/doc/ics/gipsprojekt_de.md) / heidelberg.de
- [Heilbronn Entsorgungsbetriebe](/doc/source/heilbronn_de.md) / heilbronn.de
- [Heinz-Entsorgung (Landkreis Freising)](/doc/ics/heinz_entsorgung_de.md) / heinz-entsorgung.de
- [Herten (durth-roos.de)](/doc/ics/herten_de.md) / herten.de
- [Hohenlohekreis](/doc/source/app_abfallplus_de.md) / Abfall+ App: hokwaste
- [Holtgast (MyMuell App)](/doc/source/jumomind_de.md) / mymuell.de
- [HubertSchmid Recycling und Umweltschutz GmbH](/doc/source/api_hubert_schmid_de.md) / hschmid24.de/BlaueTonne
@@ -772,6 +780,7 @@ Waste collection schedules in the following formats and countries are supported.
- [Kreis Vechta](/doc/source/app_abfallplus_de.md) / Abfall+ App: awvapp
- [Kreis Viersen](/doc/source/abfallnavi_de.md) / kreis-viersen.de
- [Kreis Vorpommern-Rügen](/doc/source/app_abfallplus_de.md) / Abfall+ App: abfallappvorue
- [Kreis Waldshut](/doc/source/app_abfallplus_de.md) / Abfall+ App: abfallwecker
- [Kreis Weißenburg-Gunzenhausen](/doc/source/app_abfallplus_de.md) / Abfall+ App: abfallappwug
- [Kreis Wesermarsch](/doc/source/app_abfallplus_de.md) / Abfall+ App: abfallappgib
- [Kreis Würzburg](/doc/source/app_abfallplus_de.md) / Abfall+ App: teamorange
@@ -854,6 +863,7 @@ Waste collection schedules in the following formats and countries are supported.
- [Landratsamt Bodenseekreis](/doc/ics/bodenseekreis_de.md) / bodenseekreis.de
- [Landratsamt Dachau](/doc/source/awido_de.md) / landratsamt-dachau.de
- [Landratsamt Main-Tauber-Kreis](/doc/source/c_trace_de.md) / main-tauber-kreis.de
- [Landratsamt Regensburg](/doc/source/awido_de.md) / landkreis-regensburg.de
- [Landratsamt Traunstein](/doc/source/abfall_io.md) / traunstein.com
- [Landratsamt Unterallgäu](/doc/source/abfall_io.md) / landratsamt-unterallgaeu.de
- [Landshut](/doc/source/app_abfallplus_de.md) / Abfall+ App: abfallappla
@@ -972,11 +982,10 @@ Waste collection schedules in the following formats and countries are supported.
- [VIVO Landkreis Miesbach](/doc/source/abfall_io.md) / vivowarngau.de
- [Volkmarsen (MyMuell App)](/doc/source/jumomind_de.md) / mymuell.de
- [Vöhringen (MyMuell App)](/doc/source/jumomind_de.md) / mymuell.de
- [Waldshut](/doc/source/app_abfallplus_de.md) / Abfall+ App: abfallwecker
- [Waldshut](/doc/source/app_abfallplus_de.md) / Abfall+ App: unterallgaeu
- [WBO Wirtschaftsbetriebe Oberhausen](/doc/source/abfallnavi_de.md) / wbo-online.de
- [Wegberg (MyMuell App)](/doc/source/jumomind_de.md) / mymuell.de
- [Wermelskirchen](/doc/source/wermelskirchen_de.md) / wermelskirchen.de
- [Wermelskirchen (Service Down)](/doc/source/wermelskirchen_de.md) / wermelskirchen.de
- [Westerholt (MyMuell App)](/doc/source/jumomind_de.md) / mymuell.de
- [Westerwaldkreis](/doc/source/app_abfallplus_de.md) / Abfall+ App: wabapp
- [WGV Recycling GmbH](/doc/source/awido_de.md) / wgv-quarzbichl.de
@@ -1203,18 +1212,22 @@ Waste collection schedules in the following formats and countries are supported.
- [BCP Council](/doc/source/bcp_gov_uk.md) / bcpcouncil.gov.uk
- [Bedford Borough Council](/doc/source/bedford_gov_uk.md) / bedford.gov.uk
- [Binzone](/doc/source/binzone_uk.md) / southoxon.gov.uk
- [Birmingham City Council](/doc/source/birmingham_gov_uk.md) / birmingham.gov.uk
- [Blackburn with Darwen Borough Council](/doc/source/blackburn_gov_uk.md) / blackburn.gov.uk
- [Blackpool Council](/doc/source/blackpool_gov_uk.md) / blackpool.gov.uk
- [Borough Council of King's Lynn & West Norfolk](/doc/source/west_norfolk_gov_uk.md) / west-norfolk.gov.uk
- [Borough of Broxbourne Council](/doc/source/broxbourne_gov_uk.md) / broxbourne.gov.uk
- [Bracknell Forest Council](/doc/source/bracknell_forest_gov_uk.md) / selfservice.mybfc.bracknell-forest.gov.uk
- [Bradford Metropolitan District Council](/doc/source/bradford_gov_uk.md) / bradford.gov.uk
- [Braintree District Council](/doc/source/braintree_gov_uk.md) / braintree.gov.uk
- [Breckland Council](/doc/source/breckland_gov_uk.md) / breckland.gov.uk/mybreckland
- [Bristol City Council](/doc/source/bristol_gov_uk.md) / bristol.gov.uk
- [Broadland District Council](/doc/source/south_norfolk_and_broadland_gov_uk.md) / area.southnorfolkandbroadland.gov.uk
- [Bromsgrove City Council](/doc/source/bromsgrove_gov_uk.md) / bromsgrove.gov.uk
- [Broxtowe Borough Council](/doc/source/broxtowe_gov_uk.md) / broxtowe.gov.uk
- [Buckinghamshire Waste Collection - Former Chiltern, South Bucks or Wycombe areas](/doc/source/chiltern_gov_uk.md) / chiltern.gov.uk
- [Burnley Council](/doc/source/burnley_gov_uk.md) / burnley.gov.uk
- [Bury Council](/doc/source/bury_gov_uk.md) / bury.gov.uk
- [Cambridge City Council](/doc/source/cambridge_gov_uk.md) / cambridge.gov.uk
- [Canterbury City Council](/doc/source/canterbury_gov_uk.md) / canterbury.gov.uk
- [Cardiff Council](/doc/source/cardiff_gov_uk.md) / cardiff.gov.uk
@@ -1236,6 +1249,7 @@ Waste collection schedules in the following formats and countries are supported.
- [Derby City Council](/doc/source/derby_gov_uk.md) / derby.gov.uk
- [Dudley Metropolitan Borough Council](/doc/source/dudley_gov_uk.md) / dudley.gov.uk
- [Durham County Council](/doc/source/durham_gov_uk.md) / durham.gov.uk
- [East Ayrshire Council](/doc/source/east_ayrshire_gov_uk.md) / east-ayrshire.gov.uk
- [East Cambridgeshire District Council](/doc/source/eastcambs_gov_uk.md) / eastcambs.gov.uk
- [East Devon District Council](/doc/source/eastdevon_gov_uk.md) / eastdevon.gov.uk
- [East Herts Council](/doc/source/eastherts_gov_uk.md) / eastherts.gov.uk
@@ -1255,6 +1269,7 @@ Waste collection schedules in the following formats and countries are supported.
- [Flintshire](/doc/source/flintshire_gov_uk.md) / flintshire.gov.uk
- [Fylde Council](/doc/source/fylde_gov_uk.md) / fylde.gov.uk
- [Gateshead Council](/doc/source/gateshead_gov_uk.md) / gateshead.gov.uk
- [Gedling Borough Council (unofficial)](/doc/ics/gedling_gov_uk.md) / github.com/jamesmacwhite/gedling-borough-council-bin-calendars
- [Glasgow City Council](/doc/source/glasgow_gov_uk.md) / glasgow.gov.uk
- [Guildford Borough Council](/doc/source/guildford_gov_uk.md) / guildford.gov.uk
- [Gwynedd](/doc/source/gwynedd_gov_uk.md) / gwynedd.gov.uk
@@ -1296,6 +1311,7 @@ Waste collection schedules in the following formats and countries are supported.
- [Newcastle City Council](/doc/source/newcastle_gov_uk.md) / community.newcastle.gov.uk
- [Newcastle Under Lyme Borough Council](/doc/source/newcastle_staffs_gov_uk.md) / newcastle-staffs.gov.uk
- [Newport City Council](/doc/source/newport_gov_uk.md) / newport.gov.uk
- [North Ayrshire Council](/doc/source/north_ayrshire_gov_uk.md) / north-ayrshire.gov.uk
- [North Herts Council](/doc/source/northherts_gov_uk.md) / north-herts.gov.uk
- [North Kesteven District Council](/doc/source/north_kesteven_org_uk.md) / n-kesteven.org.uk
- [North Lincolnshire Council](/doc/source/northlincs_gov_uk.md) / northlincs.gov.uk
@@ -1313,6 +1329,7 @@ Waste collection schedules in the following formats and countries are supported.
- [Reading Council](/doc/source/reading_gov_uk.md) / reading.gov.uk
- [Redbridge Council](/doc/source/redbridge_gov_uk.md) / redbridge.gov.uk
- [Reigate & Banstead Borough Council](/doc/source/reigatebanstead_gov_uk.md) / reigate-banstead.gov.uk
- [Renfrewshire Council](/doc/source/renfrewshire_gov_uk.md) / renfrewshire.gov.uk
- [Rhondda Cynon Taf County Borough Council](/doc/source/rctcbc_gov_uk.md) / rctcbc.gov.uk
- [Richmondshire District Council](/doc/source/richmondshire_gov_uk.md) / richmondshire.gov.uk
- [Rotherham Metropolitan Borough Council](/doc/source/rotherham_gov_uk.md) / rotherham.gov.uk
@@ -1383,6 +1400,7 @@ Waste collection schedules in the following formats and countries are supported.
<summary>United States of America</summary>
- [Albuquerque, New Mexico, USA](/doc/source/recyclecoach_com.md) / recyclecoach.com/cities/usa-nm-city-of-albuquerque
- [City of Austin, TX](/doc/ics/recollect.md) / austintexas.gov
- [City of Bloomington](/doc/ics/recollect.md) / bloomington.in.gov
- [City of Cambridge](/doc/ics/recollect.md) / cambridgema.gov
- [City of Gastonia, NC](/doc/ics/recollect.md) / gastonianc.gov


@@ -99,7 +99,17 @@ async def async_setup_platform(hass, config, async_add_entities, discovery_info=
source_index = config[CONF_SOURCE_INDEX]
if not isinstance(source_index, list):
source_index = [source_index]
aggregator = CollectionAggregator([api.get_shell(i) for i in source_index])
shells = []
for i in source_index:
shell = api.get_shell(i)
if shell is None:
raise ValueError(
f"source_index {i} out of range (0-{len(api.shells) - 1}), please check your sensor configuration"
)
shells.append(shell)
aggregator = CollectionAggregator(shells)
entities = []
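The new range check added in this hunk can be sketched in isolation; `Api`, `get_shell`, and `shells` below are minimal stand-ins for the real objects, not the actual implementation:

```python
class Api:
    """Minimal stand-in for the real API object (assumed shape)."""

    def __init__(self, shells):
        self.shells = shells

    def get_shell(self, i):
        # Return None for out-of-range indices, as the real accessor appears to do.
        return self.shells[i] if 0 <= i < len(self.shells) else None


def collect_shells(api, source_index):
    """Validate every configured index before aggregating, failing fast with a clear message."""
    if not isinstance(source_index, list):
        source_index = [source_index]
    shells = []
    for i in source_index:
        shell = api.get_shell(i)
        if shell is None:
            raise ValueError(
                f"source_index {i} out of range (0-{len(api.shells) - 1}), "
                "please check your sensor configuration"
            )
        shells.append(shell)
    return shells
```

The point of the change: a bad `source_index` now produces an actionable error instead of passing `None` into `CollectionAggregator`.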


@@ -34,6 +34,10 @@ class CollectionBase(dict): # inherit from dict to enable JSON serialization
def set_picture(self, picture: str):
self["picture"] = picture
def set_date(self, date: datetime.date):
self._date = date
self["date"] = date.isoformat()
class Collection(CollectionBase):
def __init__(
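The `set_date` change above keeps the raw `datetime.date` on the instance while writing an ISO string into the dict, so the object stays JSON-serializable. A minimal sketch of that pattern (simplified from the real class):

```python
import datetime
import json


class CollectionBase(dict):
    """dict subclass: plain-dict contents serialize directly with json.dumps."""

    def set_date(self, date: datetime.date):
        self._date = date                 # keep the real date object for comparisons
        self["date"] = date.isoformat()   # store a string so JSON serialization works


c = CollectionBase()
c.set_date(datetime.date(2024, 6, 2))
```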


@@ -153,7 +153,7 @@ SUPPORTED_SERVICES = {
"de.abfallwecker": [
"Rottweil",
"Tuttlingen",
"Waldshut",
"Kreis Waldshut",
"Prignitz",
"Nordsachsen",
],
@@ -352,6 +352,7 @@ def random_hex(length: int = 1) -> str:
API_BASE = "https://app.abfallplus.de/{}"
API_ASSISTANT = API_BASE.format("assistent/{}") # ignore: E501
USER_AGENT = "{}/9.1.0.0 iOS/17.5 Device/iPhone Screen/1170x2532"
ABFALLARTEN_H2_SKIP = ["Sondermüll"]
def extract_onclicks(
@@ -435,17 +436,12 @@ class AppAbfallplusDe:
method="post",
headers=None,
):
if headers:
headers["User-Agent"] = USER_AGENT.format(
MAP_APP_USERAGENTS.get(self._app_id, "%")
)
if headers is None:
headers = {}
else:
headers = {
"User-Agent": USER_AGENT.format(
MAP_APP_USERAGENTS.get(self._app_id, "%")
)
}
headers["User-Agent"] = USER_AGENT.format(
MAP_APP_USERAGENTS.get(self._app_id, "%")
)
if method not in ("get", "post"):
raise Exception(f"Method {method} not supported.")
@@ -778,6 +774,25 @@ class AppAbfallplusDe:
r.raise_for_status()
soup = BeautifulSoup(r.text, features="html.parser")
self._f_id_abfallart = []
for to_skip in ABFALLARTEN_H2_SKIP:
to_skip_element = soup.find("h2", text=to_skip)
div_to_skip = (
to_skip_element.find_parent("div") if to_skip_element else None
)
if div_to_skip:
for input in to_skip_element.find_parent("div").find_all(
"input", {"name": "f_id_abfallart[]"}
):
if compare(input.text, self._region_search, remove_space=True):
id = input.attrs["id"].split("_")[-1]
self._f_id_abfallart.append(input.attrs["value"])
self._needs_subtitle.append(id)
if id.isdigit():
self._needs_subtitle.append(str(int(id) - 1))
break
# remove sondermuell h2 from soup
div_to_skip.decompose()
for input in soup.find_all("input", {"name": "f_id_abfallart[]"}):
if input.attrs["value"] == "0":
if "id" not in input.attrs:
@@ -790,6 +805,7 @@ class AppAbfallplusDe:
continue
self._f_id_abfallart.append(input.attrs["value"])
self._f_id_abfallart = list(set(self._f_id_abfallart))
self._needs_subtitle = list(set(self._needs_subtitle))
def validate(self):


@@ -236,6 +236,7 @@ SERVICE_MAP = [
{"title": "Lackendorf", "url": "https://www.lackendorf.at", "country": "at"},
{"title": "Langau", "url": "http://www.langau.at", "country": "at"},
{"title": "Langenrohr", "url": "https://www.langenrohr.gv.at", "country": "at"},
{"title": "Leibnitz", "url": "https://www.leibnitz.at", "country": "at"},
{
"title": "Leithaprodersdorf",
"url": "http://www.leithaprodersdorf.at",


@@ -3,8 +3,6 @@ from alive_progress import alive_bar
import time
import requests
from bs4 import BeautifulSoup
#import threading
import sys
from requests.exceptions import HTTPError
from http import HTTPStatus
import argparse


@@ -1,5 +1,3 @@
import requests
from bs4 import BeautifulSoup
from datetime import datetime
from waste_collection_schedule import Collection # type: ignore[attr-defined]


@@ -28,7 +28,6 @@ ICON_MAP = {
"Glass": "mdi:bottle-soda",
"Bioabfall": "mdi:leaf",
"Altpapier": "mdi:package-variant",
"Altpapier": "mdi:package-variant",
"Altpapier Siemer": "mdi:package-variant",
"Altpapier Pamo": "mdi:package-variant",
"Gelbe Tonne": "mdi:recycle",


@@ -1,5 +1,6 @@
import datetime
import json
import urllib3
import pytz
import requests
@@ -9,9 +10,9 @@ from waste_collection_schedule import Collection # type: ignore[attr-defined]
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# These two lines areused to suppress the InsecureRequestWarning when using verify=False
import urllib3
urllib3.disable_warnings()
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
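Passing the warning class narrows the suppression to `InsecureRequestWarning` alone instead of silencing every urllib3 warning. The same category-targeting can be demonstrated with the stdlib `warnings` machinery (the warning classes below are stand-ins, not urllib3's):

```python
import warnings


class InsecureRequestWarningStandIn(Warning):
    """Stand-in for urllib3.exceptions.InsecureRequestWarning."""


class UnrelatedWarning(Warning):
    """A category that should remain visible."""


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # show everything by default
    # Suppress only the one category, as disable_warnings(InsecureRequestWarning) does.
    warnings.filterwarnings("ignore", category=InsecureRequestWarningStandIn)

    warnings.warn("insecure request", InsecureRequestWarningStandIn)
    warnings.warn("something else", UnrelatedWarning)

# Only the unrelated warning got through.
messages = [str(w.message) for w in caught]
```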
TITLE = "Abfallwirtschaft Landkreis Wolfenbüttel"
DESCRIPTION = "Source for ALW Wolfenbüttel."


@@ -7,6 +7,11 @@ TITLE = "Apps by Abfall+"
DESCRIPTION = "Source for Apps by Abfall+."
URL = "https://www.abfallplus.de/"
TEST_CASES = {
"de.k4systems.abfallappnf Ahrenviöl alle Straßen": {
"app_id": "de.k4systems.abfallappnf",
"city": "Ahrenviöl",
"strasse": "Alle Straßen",
},
"de.albagroup.app Braunschweig Hauptstraße 7A ": {
"app_id": "de.albagroup.app",
"city": "Braunschweig",


@@ -108,11 +108,17 @@ class Source:
date_soup = bin_text.find(
"span", id=re.compile(r"CollectionDayLookup2_Label_\w*_Date")
)
if not date_soup or " " not in date_soup.text.strip():
if not date_soup or (
" " not in date_soup.text.strip()
and date_soup.text.strip().lower() != "today"
):
continue
date_str: str = date_soup.text.strip()
try:
date = datetime.strptime(date_str.split(" ")[1], "%d/%m/%Y").date()
if date_soup.text.strip().lower() == "today":
date = datetime.now().date()
else:
date = datetime.strptime(date_str.split(" ")[1], "%d/%m/%Y").date()
except ValueError:
continue
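The new branch above accepts the literal text "today" in addition to the usual `<weekday> <dd/mm/yyyy>` format. The parsing logic can be sketched as a pure function (`parse_collection_date` is a hypothetical name; the date format matches the hunk):

```python
from datetime import date, datetime


def parse_collection_date(text: str, today: date) -> date:
    """Accept the literal 'today' as well as strings like 'Tue 04/06/2024'."""
    text = text.strip()
    if text.lower() == "today":
        return today
    # The site renders dates as '<weekday> <dd/mm/yyyy>'; keep only the numeric part.
    return datetime.strptime(text.split(" ")[1], "%d/%m/%Y").date()
```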


@@ -44,7 +44,7 @@ class Source:
ics_urls.append(href)
if not ics_urls:
raise Exception(f"ics url not found")
raise Exception("ics url not found")
entries = []
for ics_url in ics_urls:


@@ -222,6 +222,11 @@ SERVICE_MAP = [
"url": "https://www.landratsamt-roth.de/",
"service_id": "roth",
},
{
"title": "Landratsamt Regensburg",
"url": "https://www.landkreis-regensburg.de/",
"service_id": "lra-regensburg",
},
]
TEST_CASES = {


@@ -1,6 +1,7 @@
import requests
import urllib3
from html.parser import HTMLParser
import requests
from waste_collection_schedule import Collection # type: ignore[attr-defined]
from waste_collection_schedule.service.ICS import ICS
@@ -8,9 +9,8 @@ from waste_collection_schedule.service.ICS import ICS
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# These two lines areused to suppress the InsecureRequestWarning when using verify=False
import urllib3
urllib3.disable_warnings()
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
TITLE = "Abfallwirtschaft Neckar-Odenwald-Kreis"


@@ -10,8 +10,9 @@ from waste_collection_schedule import Collection # type: ignore[attr-defined]
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# These two lines areused to suppress the InsecureRequestWarning when using verify=False
urllib3.disable_warnings()
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
TITLE = "Basingstoke and Deane Borough Council"
DESCRIPTION = "Source for basingstoke.gov.uk services for Basingstoke and Deane Borough Council, UK."


@@ -1,7 +1,6 @@
from datetime import date, datetime
from datetime import datetime
from typing import List
import requests
from waste_collection_schedule import Collection
# Include work around for SSL UNSAFE_LEGACY_RENEGOTIATION_DISABLED error


@@ -34,16 +34,8 @@ class Source:
self._uprn = str(uprn).zfill(12)
def fetch(self):
s = requests.Session()
# Set up session
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
session_request = s.get(
f"https://mybexley.bexley.gov.uk/apibroker/domain/mybexley.bexley.gov.uk?_={timestamp}",
headers=HEADERS,
)
# This request gets the session ID
sid_request = s.get(
"https://mybexley.bexley.gov.uk/authapi/isauthenticated?uri=https%3A%2F%2Fmybexley.bexley.gov.uk%2Fservice%2FWhen_is_my_collection_day&hostname=mybexley.bexley.gov.uk&withCredentials=true",
@@ -53,9 +45,9 @@ class Source:
sid = sid_data['auth-session']
# This request retrieves the schedule
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
payload = {
"formValues": { "What is your address?": {"txtUPRN": {"value": self._uprn}}}
"formValues": {"What is your address?": {"txtUPRN": {"value": self._uprn}}}
}
schedule_request = s.post(
f"https://mybexley.bexley.gov.uk/apibroker/runLookup?id=61320b2acf8a3&repeat_against=&noRetry=false&getOnlyTokens=undefined&log_id=&app_name=AF-Renderer::Self&_={timestamp}&sid={sid}",
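The `_={timestamp}` query parameter here is a millisecond-epoch cache-buster, computed with integer division on `time_ns()`. A self-contained sketch (`epoch_ms` is a hypothetical helper name):

```python
from time import time_ns


def epoch_ms() -> int:
    """Millisecond epoch timestamp, as used for the cache-busting `_` query parameter."""
    return time_ns() // 1_000_000
```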


@@ -1,6 +1,5 @@
from html.parser import HTMLParser
import requests
from waste_collection_schedule import Collection # type: ignore[attr-defined]
from waste_collection_schedule.service.ICS import ICS
from waste_collection_schedule.service.SSLError import get_legacy_session


@@ -0,0 +1,97 @@
import re
from datetime import datetime
import requests
from bs4 import BeautifulSoup
from dateutil.parser import parse
from dateutil.relativedelta import relativedelta
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Birmingham City Council"
DESCRIPTION = "Source for birmingham.gov.uk services for Birmingham, UK."
URL = "https://birmingham.gov.uk"
TEST_CASES = {
"Cherry Tree Croft": {"uprn": 100070321799, "postcode": "B27 6TF"},
"Ludgate Loft Apartments": {"uprn": 10033389698, "postcode": "B3 1DW"},
"Windermere Road": {"uprn": "100070566109", "postcode": "B13 9JP"},
"Park Hill": {"uprn": "100070475114", "postcode": "B13 8DS"},
}
HEADERS = {
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36",
}
API_URLS = {
"get_session": "https://www.birmingham.gov.uk/xfp/form/619",
"collection": "https://www.birmingham.gov.uk/xfp/form/619",
}
ICON_MAP = {
"Household Collection": "mdi:trash-can",
"Recycling Collection": "mdi:recycle",
"Green Recycling Chargeable Collections": "mdi:leaf",
}
class Source:
def __init__(self, uprn: str, postcode: str):
self._uprn = uprn
self._postcode = postcode
def fetch(self):
entries: list[Collection] = []
session = requests.Session()
session.headers.update(HEADERS)
token_response = session.get(API_URLS["get_session"])
soup = BeautifulSoup(token_response.text, "html.parser")
token = soup.find("input", {"name": "__token"}).attrs["value"]
if not token:
raise ValueError(
"Could not parse CSRF Token from initial response. Won't be able to proceed."
)
form_data = {
"__token": token,
"page": "491",
"locale": "en_GB",
"q1f8ccce1d1e2f58649b4069712be6879a839233f_0_0": self._postcode,
"q1f8ccce1d1e2f58649b4069712be6879a839233f_1_0": self._uprn,
"next": "Next",
}
collection_response = session.post(API_URLS["collection"], data=form_data)
collection_soup = BeautifulSoup(collection_response.text, "html.parser")
for table_row in collection_soup.find(
"table", class_="data-table"
).tbody.find_all("tr"):
collection_type = table_row.contents[0].text
collection_next = table_row.contents[1].text
collection_date = re.findall(r"\(.*?\)", collection_next)
if len(collection_date) != 1:
continue
collection_date_obj = parse(re.sub("[()]", "", collection_date[0])).date()
# since we only have the next collection day, if the parsed date is in the past,
# assume the day is instead next month
if collection_date_obj < datetime.now().date():
collection_date_obj += relativedelta(months=1)
entries.append(
Collection(
date=collection_date_obj,
t=collection_type,
icon=ICON_MAP.get(collection_type, "mdi:help"),
)
)
if not entries:
raise ValueError(
"Could not get collections for the given combination of UPRN and Postcode."
)
return entries
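The new Birmingham source only sees the next collection day, so a parsed date in the past is pushed forward one month via `relativedelta(months=1)`. A stdlib-only approximation of that rollover, assuming the same clamp-to-month-end behaviour (function names are illustrative):

```python
import calendar
from datetime import date


def add_one_month(d: date) -> date:
    """Stdlib stand-in for dateutil's relativedelta(months=1), clamping the day for short months."""
    year, month = (d.year + 1, 1) if d.month == 12 else (d.year, d.month + 1)
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)


def resolve_next_collection(parsed: date, today: date) -> date:
    """If the site's 'next collection' parses to a past date, assume it meant next month."""
    return add_one_month(parsed) if parsed < today else parsed
```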


@@ -1,4 +1,5 @@
from datetime import datetime
import urllib3
from waste_collection_schedule import Collection # type: ignore[attr-defined]
from waste_collection_schedule.service.SSLError import get_legacy_session
@@ -26,10 +27,8 @@ API_URL = "https://mybins.blackburn.gov.uk/api/mybins/getbincollectiondays"
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# These two lines areused to suppress the InsecureRequestWarning when using verify=False
import urllib3
urllib3.disable_warnings()
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
class Source:


@@ -0,0 +1,94 @@
from datetime import datetime
import requests
from bs4 import BeautifulSoup
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Bromsgrove City Council"
DESCRIPTION = "Source for bromsgrove.gov.uk services for Bromsgrove, UK."
URL = "https://bromsgrove.gov.uk"
TEST_CASES = {
"Shakespeare House": {"uprn": "10094552413", "postcode": "B61 8DA"},
"The Lodge": {"uprn": 10000218025, "postcode": "B60 2AA"},
"Ceader Lodge": {"uprn": 100120576392, "postcode": "B60 2JS"},
"Finstall Road": {"uprn": 100120571971, "postcode": "B60 3DE"},
}
HEADERS = {
"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36",
}
API_URLS = {
"collection": "https://bincollections.bromsgrove.gov.uk/BinCollections/Details/",
}
ICON_MAP = {
"Grey": "mdi:trash-can",
"Green": "mdi:recycle",
"Brown": "mdi:leaf",
}
class Source:
def __init__(self, uprn: str, postcode: str):
self._uprn = uprn
self._postcode = "".join(postcode.split()).upper()
def fetch(self):
entries: list[Collection] = []
session = requests.Session()
session.headers.update(HEADERS)
form_data = {"UPRN": self._uprn}
collection_response = session.post(API_URLS["collection"], data=form_data)
# Parse HTML
soup = BeautifulSoup(collection_response.text, "html.parser")
# Find postcode
postcode = "".join(soup.find("h3").text.split()[-2:]).upper()
# Find bins and their collection details
bins = soup.find_all(class_="collection-container")
# Initialize lists to store extracted information
bin_info = []
# Extract information for each bin
for bin in bins:
bin_name = bin.find(class_="heading").text.strip()
bin_color = bin.find("img")["alt"]
collection_dates = []
collection_details = bin.find_all(class_="caption")
for detail in collection_details:
date_string = detail.text.split()[-3:]
collection_date = " ".join(date_string)
collection_dates.append(
datetime.strptime(collection_date, "%d %B %Y").date()
)
bin_info.append(
{
"Bin Name": bin_name,
"Bin Color": bin_color,
"Collection Dates": collection_dates,
}
)
# Check if the postcode matches the one provided, otherwise don't fill in the output
if postcode == self._postcode:
for info in bin_info:
entries.append(
Collection(
date=info["Collection Dates"][0],
t=info["Bin Name"],
icon=ICON_MAP.get(info["Bin Color"], "mdi:help"),
)
)
if not entries:
raise ValueError(
"Could not get collections for the given combination of UPRN and Postcode."
)
return entries
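The Bromsgrove source normalizes postcodes with `"".join(postcode.split()).upper()` so user input like "b61 8da" matches the site's rendering. That one-liner as a named helper (hypothetical name):

```python
def normalize_postcode(postcode: str) -> str:
    """Strip all whitespace and uppercase, so 'b61 8da' and 'B618DA' compare equal."""
    return "".join(postcode.split()).upper()
```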


@@ -0,0 +1,107 @@
import datetime
import logging
import requests
from bs4 import BeautifulSoup
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Borough of Broxbourne Council"
DESCRIPTION = "Source for broxbourne.gov.uk services for Broxbourne, UK."
URL = "https://www.broxbourne.gov.uk"
TEST_CASES = {
"Old School Cottage (Domestic Waste Only)": {
"uprn": "148040092",
"postcode": "EN10 7PX",
},
"11 Park Road (All Services)": {"uprn": "148028240", "postcode": "EN11 8PU"},
"11 Pulham Avenue (All Services)": {"uprn": 148024643, "postcode": "EN10 7TA"},
}
API_URLS = {
"get_session": "https://www.broxbourne.gov.uk/bin-collection-date",
"collection": "https://www.broxbourne.gov.uk/xfp/form/205",
}
LOGGER = logging.getLogger(__name__)
ICON_MAP = {
"Domestic": "mdi:trash-can",
"Recycling": "mdi:recycle",
"Green Waste": "mdi:leaf",
"Food": "mdi:food-apple",
}
class Source:
def __init__(self, uprn: str, postcode: str):
self._uprn = uprn
self._postcode = postcode
def fetch(self):
entries: list[Collection] = []
session = requests.Session()
token_response = session.get(API_URLS["get_session"])
soup = BeautifulSoup(token_response.text, "html.parser")
token = soup.find("input", {"name": "__token"}).attrs["value"]
if not token:
raise ValueError(
"Could not parse CSRF Token from initial response. Won't be able to proceed."
)
form_data = {
"__token": token,
"page": "490",
"locale": "en_GB",
"qacf7e570cf99fae4cb3a2e14d5a75fd0d6561058_0_0": self._postcode,
"qacf7e570cf99fae4cb3a2e14d5a75fd0d6561058_1_0": self._uprn,
"next": "Next",
}
collection_response = session.post(API_URLS["collection"], data=form_data)
collection_soup = BeautifulSoup(collection_response.text, "html.parser")
tr = collection_soup.findAll("tr")
# The council API returns no year for the collections
# and so it needs to be calculated to format the date correctly
today = datetime.date.today()
year = today.year
for item in tr[1:]: # Ignore table header row
td = item.findAll("td")
waste_type = td[1].text.rstrip()
# We need to replace characters due to encoding in form
collection_date_text = (
td[0].text.split(" ")[0].replace("\xa0", " ") + " " + str(year)
)
try:
# Broxbourne give an empty date field where there is no collection
collection_date = datetime.datetime.strptime(
collection_date_text, "%a %d %B %Y"
).date()
except ValueError as e:
LOGGER.warning(
f"No date found for wastetype: {waste_type}. The date field in the table is empty or corrupted. Failed with error: {e}"
)
continue
# Calculate the year. Since we only get collections a week in advance, assume the current
# year, unless the collection month is January while today is in December, in which case it is next year
if (collection_date.month == 1) and (today.month == 12):
collection_date = collection_date.replace(year=year + 1)
entries.append(
Collection(
date=collection_date,
t=waste_type,
icon=ICON_MAP.get(waste_type),
)
)
return entries
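The Broxbourne year-inference logic above can be condensed into one pure function: append the current year before parsing, then bump to next year for a January date seen in December (`parse_with_inferred_year` is a hypothetical name; the `%a %d %B %Y` format matches the hunk):

```python
from datetime import date, datetime


def parse_with_inferred_year(day_text: str, today: date) -> date:
    """Append the current year, then bump to next year for a January date seen in December."""
    parsed = datetime.strptime(f"{day_text} {today.year}", "%a %d %B %Y").date()
    if parsed.month == 1 and today.month == 12:
        parsed = parsed.replace(year=today.year + 1)
    return parsed
```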


@@ -85,7 +85,7 @@ class Source:
def __init__(self, abf_strasse, abf_hausnr):
self._abf_strasse = abf_strasse
self._abf_hausnr = abf_hausnr
self._ics = ICS(offset=1)
self._ics = ICS()
def fetch(self):
dates = []


@@ -0,0 +1,111 @@
import re
from datetime import datetime
import requests
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Bury Council"
DESCRIPTION = "Source for bury.gov.uk services for Bury Council, UK."
URL = "https://bury.gov.uk"
TEST_CASES = {
"Test_Address_001": {"postcode": "bl81dd", "address": "2 Oakwood Close"},
"Test_Address_002": {"postcode": "bl8 2sg", "address": "9, BIRKDALE DRIVE"},
"Test_Address_003": {"postcode": "BL8 3DG", "address": "18, slaidburn drive"},
"Test_ID_001": {"id": 649158},
"Test_ID_002": {"id": "593456"},
}
ICON_MAP = {
"brown": "mdi:leaf",
"grey": "mdi:trash-can",
"green": "mdi:package-variant",
"blue": "mdi:bottle-soda-classic",
}
NAME_MAP = {
"brown": "Garden",
"grey": "General",
"green": "Paper/Cardboard",
"blue": "Plastic/Cans/Glass",
}
HEADERS = {
"Accept": "*/*",
"Accept-Language": "en-GB,en;q=0.9",
"Connection": "keep-alive",
"Ocp-Apim-Trace": "true",
"Origin": "https://bury.gov.uk",
"Referer": "https://bury.gov.uk",
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "cross-site",
"Sec-GPC": "1",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36",
}
class Source:
def __init__(self, postcode=None, address=None, id=None):
if id is None and (postcode is None or address is None):
raise ValueError("Postcode and address must be provided")
self._id = str(id).zfill(6) if id is not None else None
self._postcode = postcode
self._address = address
def compare_address(self, address) -> bool:
return (
self._address.replace(",", "").replace(" ", "").upper()
== address.replace(",", "").replace(" ", "").upper()
)
def get_id(self, s):
url = "https://www.bury.gov.uk/app-services/getProperties"
params = {"postcode": self._postcode}
r = s.get(url, headers=HEADERS, params=params)
r.raise_for_status()
data = r.json()
if data["error"] is True:
raise ValueError("Invalid postcode")
for item in data["response"]:
if self.compare_address(item["addressLine1"]):
self._id = item["id"]
break
if self._id is None:
raise ValueError("Invalid address")
def fetch(self):
s = requests.Session()
if self._id is None:
self.get_id(s)
# Retrieve the schedule
params = {"id": self._id}
response = s.get(
"https://www.bury.gov.uk/app-services/getPropertyById",
headers=HEADERS,
params=params,
)
data = response.json()
# Define a regular expression pattern to match ordinal suffixes
ordinal_suffix_pattern = r"(?<=\d)(?:st|nd|rd|th)"
entries = []
for bin_name, bin_info in data["response"]["bins"].items():
# Remove the ordinal suffix from the date string
date_str_without_suffix = re.sub(
ordinal_suffix_pattern, "", bin_info["nextCollection"]
)
entries.append(
Collection(
date=datetime.strptime(
date_str_without_suffix,
"%A %d %B %Y",
).date(),
t=NAME_MAP[bin_name],
icon=ICON_MAP.get(bin_name),
)
)
return entries
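The ordinal-stripping step above can be exercised on its own; a minimal sketch, assuming a date string of the shape the Bury endpoint returns (the sample value is hypothetical):

```python
import re
from datetime import datetime

# Hypothetical value of bin_info["nextCollection"].
raw = "Monday 3rd June 2024"

# Same pattern as the source: drop "st"/"nd"/"rd"/"th" directly after a digit.
ordinal_suffix_pattern = r"(?<=\d)(?:st|nd|rd|th)"
cleaned = re.sub(ordinal_suffix_pattern, "", raw)

collection_date = datetime.strptime(cleaned, "%A %d %B %Y").date()
print(collection_date)  # 2024-06-03
```

The lookbehind keeps the digit itself, so only the suffix letters are removed and `%d` can parse the remaining day number.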

View File

@@ -1,6 +1,7 @@
import json
import logging
import requests
import urllib3
from datetime import datetime
from waste_collection_schedule import Collection
@@ -9,9 +10,8 @@ from waste_collection_schedule import Collection
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# These two lines areused to suppress the InsecureRequestWarning when using verify=False
import urllib3
urllib3.disable_warnings()
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
TITLE = "Chesterfield Borough Council"

View File

@@ -1,5 +1,4 @@
import datetime
import ssl
import requests
from waste_collection_schedule import Collection

View File

@@ -35,7 +35,7 @@ class Source:
message = json.loads(r.json()["message"])
entries = []
print(message)
for type in ["Household", "Recycling", "Food"]:
date_str = message[f"{type}Date"]
date = datetime.strptime(date_str, "%A %d/%m/%Y").date()

View File

@@ -75,8 +75,7 @@ class Source:
moved = self.check_date(moved.text, today, yr)
moved_to = self.check_date(moved_to.text, today, yr)
xmas_map[moved] = moved_to
except Exception as e:
print(e)
except Exception:
continue
return xmas_map

View File

@@ -0,0 +1,48 @@
import requests
from bs4 import BeautifulSoup
from dateutil import parser
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "East Ayrshire Council"
DESCRIPTION = "Source for east-ayrshire.gov.uk services for East Ayrshire"
URL = "https://www.east-ayrshire.gov.uk/"
API_URL = "https://www.east-ayrshire.gov.uk/Housing/RubbishAndRecycling/Collection-days/ViewYourRecyclingCalendar.aspx?r="
TEST_CASES = {
"Test_001": {"uprn": "127071649"},
"Test_002": {"uprn": 127072649},
"Test_003": {"uprn": 127072016},
}
ICON_MAP = {
"General waste bin": "mdi:trash-can",
"Garden waste bin": "mdi:leaf",
"Recycling trolley": "mdi:recycle",
}
class Source:
def __init__(self, uprn):
self._uprn = str(uprn)
def fetch(self):
session = requests.Session()
return self.__get_bin_collection_info_page(session, self._uprn)
def __get_bin_collection_info_page(self, session, uprn):
r = session.get(API_URL + uprn)
r.raise_for_status()
soup = BeautifulSoup(r.text, "html.parser")
bin_list = soup.find_all("time")
entries = []
for bins in bin_list:
entries.append(
Collection(
date=parser.parse(bins["datetime"]).date(),
t=bins.select_one("span.ScheduleItem").get_text().strip(),
icon=ICON_MAP.get(
bins.select_one("span.ScheduleItem").get_text().strip()
),
)
)
return entries

View File

@@ -35,7 +35,6 @@ DAYS = {
class Source:
def __init__(self, uprn: str):
self._uprn: str = uprn
print(self._uprn)
def fetch(self):
r = requests.get(API_URL.format(uprn=self._uprn))

View File

@@ -45,7 +45,7 @@ class Source:
entries = []
if PARAMS_NUMBER_PARAM_NAME not in calendar_data:
raise Exception(f"Error: parameter number not present in the url!")
raise Exception("Error: parameter number not present in the url!")
for i in range(1, int(calendar_data[PARAMS_NUMBER_PARAM_NAME]) + 1):
date_str = calendar_data[DATE_PARAM_FORMAT.format(i)]

View File

@@ -6,8 +6,14 @@ from bs4 import BeautifulSoup
from dateutil import parser
from waste_collection_schedule import Collection
# With verify=True the POST fails due to a SSLCertVerificationError.
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
TITLE = "FCC Environment"
DESCRIPTION = """
Consolidated source for waste collection services for ~60 local authorities.

View File

@@ -45,8 +45,8 @@ class Source:
cols = row.find_all("div")
cols = list(map(lambda x: x.text.strip(), cols))
if len(cols) == 0 or not re.match(r"\d{2}/\d{2}/\d{4}", cols[0]):
print("Skipping row", row.find("div"))
continue
date_str = cols[0]
date = datetime.strptime(date_str, "%d/%m/%Y").date()

View File

@@ -13,7 +13,7 @@ TEST_CASES = {
"test 2 - flat": {"uprn": "906700335412"},
}
API_URL = "https://www.glasgow.gov.uk/forms/refuseandrecyclingcalendar/CollectionsCalendar.aspx?UPRN="
API_URL = "https://onlineservices.glasgow.gov.uk/forms/RefuseAndRecyclingWebApplication/CollectionsCalendar.aspx?UPRN="
ICON_MAP = {
"purple bins": "mdi:glass-fragile",
"brown bins": "mdi:apple",

View File

@@ -77,7 +77,6 @@ class Source:
entries.append(Collection(dateStr.date(), name, ICON_MAP.get(name.upper())))
return entries
def fetch(self):
# check address values are not abbreviated
address = self._street

View File

@@ -64,7 +64,7 @@ class Source:
# 'collection' api call seems to require an ASP.Net_sessionID, so obtain the relevant cookie
s = requests.Session()
q = requote_uri(str(API_URLS["session"]))
r0 = s.get(q, headers = HEADERS)
s.get(q, headers = HEADERS)
# Do initial address search
address = "{} {} {} {}".format(self.street_number, self.street_name, self.town, self.post_code)

View File

@@ -30,10 +30,6 @@ class Source:
)
# extract data from json
data = json.loads(r.text)
entries = []
collections = r.json()["collections"]
entries = []

View File

@@ -181,7 +181,6 @@ class Source:
return self.fetch_file(self._file)
def fetch_url(self, url, params=None):
print(url)
# get ics file
if self._method == "GET":
r = requests.get(
@@ -195,7 +194,7 @@ class Source:
raise RuntimeError(
f"Error: unknown method to fetch URL, use GET or POST; got {self._method}"
)
print(r.text)
r.raise_for_status()
if r.apparent_encoding == "UTF-8-SIG":

View File

@@ -1,6 +1,5 @@
import json
import requests
import urllib
from datetime import datetime

View File

@@ -9,8 +9,9 @@ from waste_collection_schedule.service.ICS import ICS
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# These two lines areused to suppress the InsecureRequestWarning when using verify=False
urllib3.disable_warnings()
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
TITLE = "City of Karlsruhe"
DESCRIPTION = "Source for City of Karlsruhe."

View File

@@ -0,0 +1,110 @@
import logging
from datetime import datetime
import requests
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Kiertokapula Finland"
DESCRIPTION = "Schedule for kiertokapula FI"
URL = "https://www.kiertokapula.fi"
TEST_CASES = {
"Test1": {
"bill_number": "!secret kiertonkapula_fi_bill_number",
"password": "!secret kiertonkapula_fi_bill_password",
}
}
ICON_MAP = {
"SEK": "mdi:trash-can",
"MUO": "mdi:delete-variant",
"KAR": "mdi:package-variant",
"LAS": "mdi:glass-wine",
"MET": "mdi:tools",
"BIO": "mdi:leaf",
}
NAME_DEF = {
"SEK": "Sekajäte",
"MUO": "Muovi",
"KAR": "Kartonki",
"LAS": "Lasi",
"MET": "Metalli",
"BIO": "Bio",
}
API_URL = "https://asiakasnetti.kiertokapula.fi/kiertokapula"
_LOGGER = logging.getLogger(__name__)
class Source:
def __init__(
self,
bill_number,
password,
):
self._bill_number = bill_number
self._password = password
def fetch(self):
session = requests.Session()
session.headers.update({"X-Requested-With": "XMLHttpRequest"})
session.get(API_URL)
# sign in
r = session.post(
API_URL + "/j_acegi_security_check?target=2",
data={
"j_username": self._bill_number,
"j_password": self._password,
"remember-me": "false",
},
)
r.raise_for_status()
# get customer info
r = session.get(API_URL + "/secure/get_customer_datas.do")
r.raise_for_status()
data = r.json()
entries = []
for estate in data.values():
for customer in estate:
r = session.get(
API_URL + "/secure/get_services_by_customer_numbers.do",
params={"customerNumbers[]": customer["asiakasnro"]},
)
r.raise_for_status()
data = r.json()
for service in data:
if service["tariff"].get("productgroup", "PER") == "PER":
continue
next_date_str = None
if (
"ASTSeurTyhj" in service
and service["ASTSeurTyhj"] is not None
and len(service["ASTSeurTyhj"]) > 0
):
next_date_str = service["ASTSeurTyhj"]
elif (
"ASTNextDate" in service
and service["ASTNextDate"] is not None
and len(service["ASTNextDate"]) > 0
):
next_date_str = service["ASTNextDate"]
if next_date_str is None:
continue
next_date = datetime.strptime(next_date_str, "%Y-%m-%d").date()
entries.append(
Collection(
date=next_date,
t=service.get(
"ASTNimi",
NAME_DEF.get(service["tariff"]["productgroup"], "N/A"),
),
icon=ICON_MAP.get(service["tariff"]["productgroup"]),
)
)
return entries

View File

@@ -66,7 +66,7 @@ class Source:
# 'collection' api call seems to require an ASP.Net_sessionID, so obtain the relevant cookie
s = requests.Session()
q = requote_uri(str(API_URLS["session"]))
r0 = s.get(q, headers = HEADERS)
s.get(q, headers = HEADERS)
# Do initial address search
address = "{} {} {} {}".format(self.street_number, self.street_name, self.suburb, self.post_code)

View File

@@ -84,7 +84,7 @@ class Source:
# 'collection' api call seems to require an ASP.Net_sessionID, so obtain the relevant cookie
s = requests.Session()
q = requote_uri(str(API_URLS["session"]))
r0 = s.get(q, headers = HEADERS)
s.get(q, headers = HEADERS)
# Do initial address search
address = "{} {}, {} NSW {}".format(self.street_number, self.street_name, self.suburb, self.post_code)

View File

@@ -34,8 +34,6 @@ class Source:
self._district = district
def fetch(self):
now = datetime.datetime.now().date()
r = requests.get(API_URL, params={
"stadt": self._city,
"ortsteil": self._district

View File

@@ -1,4 +1,3 @@
import datetime
import json
import requests

View File

@@ -1,5 +1,4 @@
import datetime
import logging
import re
import requests
@@ -21,31 +20,30 @@ API_URLS = {
"collection": "https://lewisham.gov.uk/api/roundsinformation",
}
URPN_DATA_ITEM = '{79b58e9a-0997-4f18-bb97-637fac570dd1}'
UPRN_DATA_ITEM = '{79b58e9a-0997-4f18-bb97-637fac570dd1}'
REGEX = "<strong>(?P<type>Food and garden waste|Recycling|Refuse).*?</strong>.*?>(?P<frequency>.*?)<.*?\\\\t(?P<weekday>[A-Za-z]*day).*?(?:<br><br>|[A-Za-z\\\\]*?(?P<next>\d{2}/\d{2}/\d{4}))"
DAYS = ["MONDAY","TUESDAY","WEDNESDAY","THURSDAY","FRIDAY","SATURDAY","SUNDAY"]
DAYS = ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY", "SATURDAY", "SUNDAY"]
BINS = {
"Refuse": {
"icon": "mdi:trash-can",
"alias": "Black Refuse"
},
},
"Recycling": {
"icon": "mdi:recycle",
"alias": "Green Recycling"
},
},
"Food": {
"icon": "mdi:food-apple",
"alias": "Grey Food"
},
},
"Garden": {
"icon": "mdi:leaf",
"alias": "Brown Garden"
}
}
}
#_LOGGER = logging.getLogger(__name__)
class Source:
def __init__(self, post_code=None, number=None, name=None, uprn=None):
@@ -55,9 +53,8 @@ class Source:
self._uprn = uprn
def fetch(self):
now = datetime.date.today()
if not self._uprn:
# look up the UPRN for the address
p = {'postcodeOrStreet': self._post_code}
r = requests.post(API_URLS["address_search"], params=p)
@@ -77,7 +74,7 @@ class Source:
raise Exception(f"Could not find address {self._post_code} {self._number}{self._name}")
p = {
'item': URPN_DATA_ITEM,
'item': UPRN_DATA_ITEM,
'uprn': self._uprn
}
r = requests.post(API_URLS["collection"], params=p)
@@ -91,18 +88,18 @@ class Source:
for collection in collections:
if collection[0].__contains__(' and '):
collections.append([collection[0].split(" and ",2)[0].title().replace(" Waste",""), collection[1], collection[2], collection[3]])
collections.append([collection[0].split(" and ",2)[1].title().replace(" Waste",""), collection[1], collection[2], collection[3]])
collections.append([collection[0].split(" and ", 2)[0].title().replace(" Waste", ""), collection[1], collection[2], collection[3]])
collections.append([collection[0].split(" and ", 2)[1].title().replace(" Waste", ""), collection[1], collection[2], collection[3]])
else:
if collection[3] != "":
nextDate = datetime.datetime.strptime(collection[3], "%d/%m/%Y").date()
next_date = datetime.datetime.strptime(collection[3], "%d/%m/%Y").date()
elif collection[1] == "WEEKLY":
d = datetime.date.today();
nextDate = d + datetime.timedelta((DAYS.index(collection[2].upper())+1 - d.isoweekday()) % 7)
d = datetime.date.today()
next_date = d + datetime.timedelta((DAYS.index(collection[2].upper())+1 - d.isoweekday()) % 7)
entries.append(
Collection(
date=nextDate,
date=next_date,
t=BINS.get(collection[0])['alias'],
icon=BINS.get(collection[0])['icon']
)
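The weekly fallback above derives the next occurrence of a weekday with modular arithmetic; a standalone sketch of the same computation, using a fixed date for reproducibility:

```python
import datetime

DAYS = ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY", "SATURDAY", "SUNDAY"]

def next_weekly(weekday: str, today: datetime.date) -> datetime.date:
    # DAYS.index(...) + 1 maps the name to ISO numbering (Monday=1 ... Sunday=7);
    # the modulo gives the days until the next occurrence, with 0 meaning today.
    return today + datetime.timedelta(
        (DAYS.index(weekday.upper()) + 1 - today.isoweekday()) % 7
    )

monday = datetime.date(2024, 6, 3)  # a Monday
print(next_weekly("Wednesday", monday))  # 2024-06-05
print(next_weekly("Monday", monday))     # 2024-06-03 (collection day is today)
```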

View File

@@ -1,5 +1,3 @@
from datetime import datetime
import re
import requests
from bs4 import BeautifulSoup

View File

@@ -1,4 +1,3 @@
import datetime
import requests
from waste_collection_schedule import Collection
from waste_collection_schedule.service.ICS import ICS

View File

@@ -1,4 +1,3 @@
import json
import logging
from datetime import datetime

View File

@@ -1,8 +1,8 @@
import json
import requests
from datetime import datetime
from time import time_ns
import requests
from waste_collection_schedule import Collection # type: ignore[attr-defined]
# Many thanks to dt215git for their work on the Bexley version of this provider which helped me write this.
@@ -11,83 +11,108 @@ TITLE = "Maidstone Borough Council"
DESCRIPTION = "Source for maidstone.gov.uk services for Maidstone Borough Council."
URL = "https://maidstone.gov.uk"
TEST_CASES = {
"Test_001": {"uprn": "10022892379"}, # has mutliple collections on same week per bin type
"Test_002": {"uprn": 10014307164}, # has duplicates of the same collection (two bins for this block of flats?)
"Test_003": {"uprn": "200003674881"} # has garden waste collection, at time of coding
"Test_001": {
"uprn": "10022892379"
}, # has multiple collections on same week per bin type
"Test_002": {
"uprn": 10014307164
}, # has duplicates of the same collection (two bins for this block of flats?)
"Test_003": {
"uprn": "200003674881"
}, # has garden waste collection, at time of coding
}
HEADERS = {
"user-agent": "Mozilla/5.0",
}
# map names and icons, maidstone group food recycling for both
BIN_MAP = {
"REFUSE": {"icon":"mdi:trash-can", "name": "Black bin and food"},
"RECYCLING": {"icon":"mdi:recycle", "name": "Recycling bin and food"},
"GARDEN": {"icon":"mdi:leaf", "name": "Garden bin"}
ICON_MAP = {
"clinical": "mdi:medical-bag",
"bulky": "mdi:sofa",
"residual": "mdi:trash-can",
"recycling": "mdi:recycle",
"garden": "mdi:leaf",
"food": "mdi:food",
}
class Source:
def __init__(self, uprn):
#self._uprn = str(uprn).zfill(12)
self._uprn = str(uprn)
# self._uprn = str(uprn).zfill(12)
self._uprn = str(uprn).strip()
def fetch(self):
s = requests.Session()
# Set up session
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
session_request = s.get(
f"https://self.maidstone.gov.uk/apibroker/domain/self.maidstone.gov.uk?_={timestamp}",
s.get(
f"https://my.maidstone.gov.uk/apibroker/domain/my.maidstone.gov.uk?_={timestamp}&sid=979631f89458fc974cc2aa69ebbd7996",
headers=HEADERS,
)
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
# This request gets the session ID
sid_request = s.get(
"https://self.maidstone.gov.uk/authapi/isauthenticated?uri=https%3A%2F%2Fself.maidstone.gov.uk%2Fservice%2Fcheck_your_bin_day&hostname=self.maidstone.gov.uk&withCredentials=true",
headers=HEADERS
"https://my.maidstone.gov.uk/authapi/isauthenticated?uri=https%3A%2F%2Fmy.maidstone.gov.uk%2Fservice%2FFind-your-bin-day&hostname=my.maidstone.gov.uk&withCredentials=true",
headers=HEADERS,
)
sid_data = sid_request.json()
sid = sid_data['auth-session']
sid = sid_data["auth-session"]
# This request retrieves the schedule
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
payload = {
"formValues": { "Your collections": {"address": {"value" : self._uprn}, "uprn": {"value": self._uprn}}}
"formValues": {
"Lookup": {
"AddressData": {"value": self._uprn},
"AddressUPRN": {"value": self._uprn},
}
}
}
entries = []
schedule_request = s.post(
f"https://my.maidstone.gov.uk/apibroker/runLookup?id=654b7b6478deb&repeat_against=&noRetry=true&getOnlyTokens=undefined&log_id=&app_name=AF-Renderer::Self&_={timestamp}&sid={sid}",
headers=HEADERS,
json=payload,
)
rowdata = json.loads(schedule_request.content)["integration"]["transformed"][
"rows_data"
][self._uprn]
collections: dict[str, dict[str, list[datetime.date] | str]] = {}
# Extract bin types and next collection dates, for some reason unlike all others that use this service, you need to submit a bin type to get useful dates.
for bin in BIN_MAP.keys():
# set seen dates
seen = []
for key, value in rowdata.items():
if (
"NextCollectionDateMM" in key or "LastCollectionOriginalDateMM" in key
) and value != "":
collection_key = key.split("_")[0]
if collection_key not in collections:
collections[collection_key] = {"dates": []}
collections[collection_key]["dates"].append(
datetime.strptime(value, "%d/%m/%Y").date()
)
if "_Description" in key:
collection_key = key.split("_")[0]
if collection_key not in collections:
collections[collection_key] = {"dates": []}
collections[collection_key]["description"] = value
# create payload for bin type
payload = {
"formValues": { "Your collections": {"bin": {"value": bin}, "address": {"value" : self._uprn}, "uprn": {"value": self._uprn}}}
}
schedule_request = s.post(
f"https://self.maidstone.gov.uk/apibroker/runLookup?id=5c18dbdcb12cf&repeat_against=&noRetry=false&getOnlyTokens=undefined&log_id=&app_name=AF-Renderer::Self&_={timestamp}&sid={sid}",
headers=HEADERS,
json=payload
for key, collection in collections.items():
bin = collection.get("description") or key
icon = ICON_MAP.get(
bin.lower()
.replace("domestic ", "")
.replace("communal ", "")
.replace("waste", "")
.strip()
)
rowdata = json.loads(schedule_request.content)['integration']['transformed']['rows_data']
for item in rowdata:
collectionDate = rowdata[item]["Date"]
# need to dedupe as MBC seem to list the same collection twice for some places
if collectionDate not in seen:
entries.append(
Collection(
t=BIN_MAP[bin]['name'],
date=datetime.strptime(
collectionDate, "%d/%m/%Y"
).date(),
icon=BIN_MAP.get(bin).get('icon'),
)
for collectionDate in set(collection["dates"]):
entries.append(
Collection(
t=bin,
date=collectionDate,
icon=icon,
)
# add this date to seen so we don't use it again
seen.append(collectionDate)
return entries
)
return entries
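The grouping logic above keys rows by the prefix before the first underscore; a minimal sketch with a hypothetical `rows_data` fragment (field names assumed from the code, not verified against the live broker):

```python
from datetime import datetime

# Hypothetical rows_data fragment of the shape the apibroker returns.
rowdata = {
    "REFUSE_NextCollectionDateMM": "07/06/2024",
    "REFUSE_LastCollectionOriginalDateMM": "31/05/2024",
    "REFUSE_Description": "Domestic Refuse",
}

collections = {}
for key, value in rowdata.items():
    prefix = key.split("_")[0]
    entry = collections.setdefault(prefix, {"dates": []})
    if ("NextCollectionDateMM" in key or "LastCollectionOriginalDateMM" in key) and value != "":
        entry["dates"].append(datetime.strptime(value, "%d/%m/%Y").date())
    elif "_Description" in key:
        entry["description"] = value

print(collections["REFUSE"]["description"])  # Domestic Refuse
```

Deduplicating with `set(collection["dates"])`, as the source does, then handles councils that report the same collection twice.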

View File

@@ -3,17 +3,17 @@ from datetime import datetime
import requests
from bs4 import BeautifulSoup
from waste_collection_schedule import Collection
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Maldon District Council"
DESCRIPTION = ("Source for www.maldon.gov.uk services for Maldon, UK")
DESCRIPTION = "Source for www.maldon.gov.uk services for Maldon, UK"
URL = "https://www.maldon.gov.uk/"
TEST_CASES = {
"test 1": {"uprn": "200000917928"},
"test 2": {"uprn": "100091258454"},
"test 2": {"uprn": 100091258454},
}
API_URL = "https://maldon.suez.co.uk/maldon/ServiceSummary?uprn="
@@ -25,15 +25,15 @@ ICON_MAP = {
"Food": "mdi:food-apple",
}
class Source:
def __init__(self, uprn: str):
self._uprn = uprn
def _extract_future_date(self, text):
def _extract_dates(self, text):
# parse all dd/mm/yyyy dates found in the text
dates = re.findall(r'\d{2}/\d{2}/\d{4}', text)
dates = [datetime.strptime(date, '%d/%m/%Y').date() for date in dates]
return max(dates)
dates = re.findall(r"\d{2}/\d{2}/\d{4}", text)
return [datetime.strptime(date, "%d/%m/%Y").date() for date in dates]
def fetch(self):
entries = []
@@ -51,15 +51,19 @@ class Source:
# check is a collection row
title = collection.find("h2", {"class": "panel-title"}).text.strip()
if title == "Other Services" or "You are not currently subscribed" in collection.text:
if (
title == "Other Services"
or "You are not currently subscribed" in collection.text
):
continue
entries.append(
Collection(
date=self._extract_future_date(collection.text),
t=title,
icon=ICON_MAP.get(title),
for date in self._extract_dates(collection.text):
entries.append(
Collection(
date=date,
t=title,
icon=ICON_MAP.get(title),
)
)
)
return entries

View File

@@ -69,10 +69,13 @@ class Source:
for article in soup.find_all("article"):
waste_type = article.h3.string
icon = ICON_MAP.get(waste_type)
next_pickup = article.find(class_="next-service").string.strip()
if re.match(r"[^\s]* \d{1,2}\/\d{1,2}\/\d{4}", next_pickup):
next_pickup = article.find(class_="next-service").string
if next_pickup is None:
continue
date_match = re.search(r"\d{1,2}\/\d{1,2}\/\d{4}", next_pickup)
if date_match:
next_pickup_date = datetime.strptime(
next_pickup.split(sep=" ")[1], "%d/%m/%Y"
date_match.group(0), "%d/%m/%Y"
).date()
entries.append(
Collection(date=next_pickup_date, t=waste_type, icon=icon)

View File

@@ -61,7 +61,7 @@ class Source:
# response is in HTML - parse it
soup = BeautifulSoup(body, "html.parser")
_LOGGER.debug(f"Parsed mojiodpadki.si response")
_LOGGER.debug("Parsed mojiodpadki.si response")
# find years, months, dates and waste tags in all document tables
year = datetime.date.today().year

View File

@@ -1,4 +1,3 @@
import json
from datetime import datetime
import requests

View File

@@ -0,0 +1,85 @@
import requests
from dateutil import parser
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "North Ayrshire Council"
DESCRIPTION = "Source for north-ayrshire.gov.uk services for North Ayrshire"
URL = "https://www.north-ayrshire.gov.uk/"
API_URL = "https://www.maps.north-ayrshire.gov.uk/arcgis/rest/services/AGOL/YourLocationLive/MapServer/8/query?f=json&outFields=*&returnDistinctValues=true&returnGeometry=false&spatialRel=esriSpatialRelIntersects&where=UPRN%20%3D%20%27{0}%27"
TEST_CASES = {
"Test_001": {"uprn": "126043248"},
"Test_002": {"uprn": 126021147},
"Test_003": {"uprn": 126091148},
}
ICON_MAP = {
"Grey": "mdi:trash-can",
"Brown": "mdi:leaf",
"Purple": "mdi:glass-fragile",
"Blue": "mdi:recycle",
}
class Source:
def __init__(self, uprn):
self._uprn = str(uprn)
def fetch(self):
return self.__get_bin_collection_info_json(self._uprn)
def __get_bin_collection_info_json(self, uprn):
r = requests.get(API_URL.format(uprn))
bin_json = r.json()["features"]
bin_list = []
if "BLUE_DATE_TEXT" in bin_json[0]["attributes"]:
bin_list.append(
[
"Blue",
"/".join(
reversed(bin_json[0]["attributes"]["BLUE_DATE_TEXT"].split("/"))
),
]
)
if "GREY_DATE_TEXT" in bin_json[0]["attributes"]:
bin_list.append(
[
"Grey",
"/".join(
reversed(bin_json[0]["attributes"]["GREY_DATE_TEXT"].split("/"))
),
]
)
if "PURPLE_DATE_TEXT" in bin_json[0]["attributes"]:
bin_list.append(
[
"Purple",
"/".join(
reversed(
bin_json[0]["attributes"]["PURPLE_DATE_TEXT"].split("/")
)
),
]
)
if "BROWN_DATE_TEXT" in bin_json[0]["attributes"]:
bin_list.append(
[
"Brown",
"/".join(
reversed(
bin_json[0]["attributes"]["BROWN_DATE_TEXT"].split("/")
)
),
]
)
entries = []
for bins in bin_list:
entries.append(
Collection(
date=parser.parse(bins[1]).date(),
t=bins[0],
icon=ICON_MAP.get(bins[0]),
)
)
return entries
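The repeated `"/".join(reversed(...))` above flips a UK d/m/Y string into y/m/d order before parsing; a minimal standalone sketch (the sample value is hypothetical):

```python
from datetime import datetime

raw = "28/06/2024"  # hypothetical GREY_DATE_TEXT value
flipped = "/".join(reversed(raw.split("/")))
print(flipped)  # 2024/06/28

# The source hands the flipped string to dateutil.parser; stdlib strptime
# parses the reordered form unambiguously as well.
print(datetime.strptime(flipped, "%Y/%m/%d").date())  # 2024-06-28
```

Reordering to year-first avoids the day/month ambiguity a generic parser would otherwise face with values like 06/05/2024.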

View File

@@ -56,17 +56,16 @@ class Source:
date = date_li.text
try:
date = datetime.strptime(date.split(",")[1].strip(), "%d %B %Y").date()
except:
print("No date")
except Exception:
continue
entries.append(
Collection(
date=date,
t=bin_name,
icon=icon,
)
)
Collection(
date=date,
t=bin_name,
icon=icon,
)
)
return entries

View File

@@ -52,7 +52,6 @@ class Source:
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64)",
}
requests.packages.urllib3.disable_warnings()
# Get variables for workings
response = requests.get(

View File

@@ -1,11 +1,8 @@
import requests
import urllib.parse
import json
import datetime
import re
from waste_collection_schedule import Collection # type: ignore[attr-defined]
from pprint import pprint
TITLE = "Oslo Kommune"
DESCRIPTION = "Oslo Kommune (Norway)."

View File

@@ -1,4 +1,3 @@
import logging
import datetime
import requests

View File

@@ -11,8 +11,9 @@ from waste_collection_schedule import Collection # type: ignore[attr-defined]
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# These two lines areused to suppress the InsecureRequestWarning when using verify=False
urllib3.disable_warnings()
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
TITLE = "Port Adelaide Enfield, South Australia"
DESCRIPTION = "Source for City of Port Adelaide Enfield, South Australia."

View File

@@ -2,7 +2,6 @@ import requests
from waste_collection_schedule import Collection # type: ignore[attr-defined]
from datetime import datetime, timedelta
from dateutil import rrule
import json
TITLE = "Potsdam"

View File

@@ -1,4 +1,4 @@
from datetime import date, datetime
from datetime import datetime
from typing import List
import requests

View File

@@ -13,8 +13,8 @@ URL = "https://reigate-banstead.gov.uk"
TEST_CASES = {
"Test_001": {"uprn": 68110755},
"Test_002": {"uprn": "000068110755"},
"Test_003": {"uprn": "68101147"}, #commercial refuse collection
"Test_004": {"uprn": "000068101147"}, #commercial refuse collection
"Test_003": {"uprn": "68101147"}, # commercial refuse collection
"Test_004": {"uprn": "000068101147"}, # commercial refuse collection
}
HEADERS = {
"user-agent": "Mozilla/5.0",
@@ -22,16 +22,17 @@ HEADERS = {
ICON_MAP = {
"FOOD WASTE": "mdi:food",
"MIXED RECYCLING": "mdi:recycle",
"GLASS": "mdi:recycle", #commercial
"MIXED CANS": "mdi:recycle", #commercial
"PLASTIC": "mdi:recycle", #commercial
"GLASS": "mdi:recycle", # commercial
"MIXED CANS": "mdi:recycle", # commercial
"PLASTIC": "mdi:recycle", # commercial
"PAPER AND CARDBOARD": "mdi:newspaper",
"TRADE - PAPER AND CARDBOARD": "mdi:newspaper", #commercial
"TRADE - PAPER AND CARDBOARD": "mdi:newspaper", # commercial
"REFUSE": "mdi:trash-can",
"TRADE - REFUSE": "mdi:trash-can", #commercial
"TRADE - REFUSE": "mdi:trash-can", # commercial
"GARDEN WASTE": "mdi:leaf",
}
class Source:
def __init__(self, uprn):
self._uprn = str(uprn)
@@ -40,13 +41,6 @@ class Source:
s = requests.Session()
# Set up session
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
session_request = s.get(
f"https://my.reigate-banstead.gov.uk/apibroker/domain/my.reigate-banstead.gov.uk?_={timestamp}",
headers=HEADERS,
)
# This request gets the session ID
sid_request = s.get(
"https://my.reigate-banstead.gov.uk/authapi/isauthenticated?uri=https%3A%2F%2Fmy.reigate-banstead.gov.uk%2Fservice%2FBins_and_recycling___collections_calendar&hostname=my.reigate-banstead.gov.uk&withCredentials=true",
@@ -67,18 +61,28 @@ class Source:
# This request retrieves the schedule
timestamp = time_ns() // 1_000_000 # epoch time in milliseconds
min_date = datetime.today().strftime("%Y-%m-%d") #today
max_date = datetime.today() + timedelta(days=28) # max of 28 days ahead
min_date = datetime.today().strftime("%Y-%m-%d") # today
max_date = datetime.today() + timedelta(days=28) # max of 28 days ahead
max_date = max_date.strftime("%Y-%m-%d")
payload = {
"formValues": { "Section 1": {"uprnPWB": {"value": self._uprn},
"minDate": {"value": min_date},
"maxDate": {"value": max_date},
"tokenString": {"value": token_string},
}
}
}
"formValues": {
"Section 1": {
"uprnPWB": {
"value": self._uprn
},
"minDate": {
"value": min_date
},
"maxDate": {
"value": max_date
},
"tokenString": {
"value": token_string
},
}
}
}
schedule_request = s.post(
f"https://my.reigate-banstead.gov.uk/apibroker/runLookup?id=609d41ca89251&repeat_against=&noRetry=true&getOnlyTokens=undefined&log_id=&app_name=AF-Renderer::Self&_={timestamp}&sid={sid}",
@@ -94,20 +98,20 @@ class Source:
bindata = rowdata.findAll("ul")
# Extract bin types and next collection dates
x=0
x = 0
entries = []
for item in bindata:
bin_date = datedata[x].text.strip()
x=x+1
x = x+1
bins = item.findAll('span')
for bin in bins:
bin_type=bin.text.strip()
for bin_name in bins:
bin_type = bin_name.text.strip()
entries.append(
Collection(
t=bin_type,
date=datetime.strptime(bin_date, "%A %d %B %Y").date(),
icon=ICON_MAP.get(bin.text.upper())
icon=ICON_MAP.get(bin_name.text.upper())
)
)
return entries
return entries

View File

@@ -0,0 +1,98 @@
from urllib.parse import parse_qs, urlparse
import requests
from bs4 import BeautifulSoup
from dateutil import parser
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Renfrewshire Council"
DESCRIPTION = "Source for renfrewshire.gov.uk services for Renfrewshire"
URL = "https://renfrewshire.gov.uk/"
API_URL = "https://www.renfrewshire.gov.uk/article/2320/Check-your-bin-collection-day"
TEST_CASES = {
"Test_001": {"postcode": "PA12 4JU", "uprn": 123033059},
"Test_002": {"postcode": "PA12 4AJ", "uprn": "123034174"},
"Test_003": {"postcode": "PA12 4EW", "uprn": "123033042"},
}
ICON_MAP = {
"Grey": "mdi:trash-can",
"Brown": "mdi:leaf",
"Green": "mdi:glass-fragile",
"Blue": "mdi:note",
}
class Source:
def __init__(self, postcode, uprn):
self._postcode = postcode
self._uprn = str(uprn)
def fetch(self):
session = requests.Session()
bin_collection_info_page = self.__get_bin_collection_info_page(
session, self._uprn, self._postcode
)
return self.__get_bin_collection_info(bin_collection_info_page)
def __get_goss_form_ids(self, url):
parsed_form_url = urlparse(url)
form_url_values = parse_qs(parsed_form_url.query)
return {
"page_session_id": form_url_values["pageSessionId"][0],
"session_id": form_url_values["fsid"][0],
"nonce": form_url_values["fsn"][0],
}
def __get_bin_collection_info_page(self, session, uprn, postcode):
r = session.get(API_URL)
r.raise_for_status()
soup = BeautifulSoup(r.text, "html.parser")
form = soup.find(id="RENFREWSHIREBINCOLLECTIONS_FORM")
goss_ids = self.__get_goss_form_ids(form["action"])
r = session.post(
form["action"],
data={
"RENFREWSHIREBINCOLLECTIONS_PAGESESSIONID": goss_ids["page_session_id"],
"RENFREWSHIREBINCOLLECTIONS_SESSIONID": goss_ids["session_id"],
"RENFREWSHIREBINCOLLECTIONS_NONCE": goss_ids["nonce"],
"RENFREWSHIREBINCOLLECTIONS_VARIABLES": "",
"RENFREWSHIREBINCOLLECTIONS_PAGENAME": "PAGE1",
"RENFREWSHIREBINCOLLECTIONS_PAGEINSTANCE": "0",
"RENFREWSHIREBINCOLLECTIONS_PAGE1_ADDRESSSTRING": "",
"RENFREWSHIREBINCOLLECTIONS_PAGE1_UPRN": uprn,
"RENFREWSHIREBINCOLLECTIONS_PAGE1_ADDRESSLOOKUPPOSTCODE": postcode,
"RENFREWSHIREBINCOLLECTIONS_PAGE1_NAVBUTTONS_NEXT": "Load Address",
},
)
r.raise_for_status()
return r.text
def __get_bin_collection_info(self, binformation):
soup = BeautifulSoup(binformation, "html.parser")
all_collections = soup.select(
"#RENFREWSHIREBINCOLLECTIONS_PAGE1_COLLECTIONDETAILS"
)
for collection in all_collections:
dates = collection.select("p.collection__date")
date_list = []
bin_list = []
for individualdate in dates:
date_list.append(parser.parse(individualdate.get_text()).date())
bins = collection.select("p.bins__name")
for individualbin in bins:
bin_list.append(individualbin.get_text().strip())
schedule = list(zip(date_list, bin_list))
entries = []
for sched_entry in schedule:
entries.append(
Collection(
date=sched_entry[0],
t=sched_entry[1],
icon=ICON_MAP.get(sched_entry[1]),
)
)
return entries

View File

@@ -1,13 +1,17 @@
import json
from datetime import datetime
import requests
# Suppress error messages relating to SSLCertVerificationError
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
from datetime import datetime
from waste_collection_schedule import Collection # type: ignore[attr-defined]
# With verify=True the POST fails due to an SSLCertVerificationError.
# Using verify=False works, but is not ideal. The following links may provide a better way of dealing with this:
# https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
# https://urllib3.readthedocs.io/en/1.26.x/user-guide.html#ssl
# This line suppresses the InsecureRequestWarning when using verify=False
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
TITLE = "Stevenage Borough Council"
DESCRIPTION = "Source for stevenage.gov.uk services for Stevenage, UK."

View File

@@ -14,7 +14,7 @@ TEST_CASES = {
"Deprecated postcode No Spaces": {"postcode": "GL205TT"},
}
DEPRICATED_API_URL = "https://api-2.tewkesbury.gov.uk/general/rounds/%s/nextCollection"
DEPRECATED_API_URL = "https://api-2.tewkesbury.gov.uk/general/rounds/%s/nextCollection"
API_URL = "https://api-2.tewkesbury.gov.uk/incab/rounds/%s/next-collection"
ICON_MAP = {
@@ -29,23 +29,23 @@ LOGGER = logging.getLogger(__name__)
class Source:
def __init__(self, postcode: str | None = None, uprn: str | None = None):
self.urpn = str(uprn) if uprn is not None else None
self.uprn = str(uprn) if uprn is not None else None
self.postcode = str(postcode) if postcode is not None else None
def fetch(self):
if self.urpn is None:
if self.uprn is None:
LOGGER.warning(
"Using deprecated API might not work in the future. Please provide a UPRN."
)
return self.get_data(self.postcode, DEPRICATED_API_URL)
return self.get_data(self.urpn)
return self.get_data(self.postcode, DEPRECATED_API_URL)
return self.get_data(self.uprn)
def get_data(self, uprn, api_url=API_URL):
if uprn is None:
raise Exception("UPRN not set")
encoded_urpn = urlquote(uprn)
request_url = api_url % encoded_urpn
encoded_uprn = urlquote(uprn)
request_url = api_url % encoded_uprn
response = requests.get(request_url)
response.raise_for_status()

View File

@@ -14,7 +14,7 @@ TEST_CASES = {
"street": "Amanda Place",
"houseNo": 10,
},
"Annangrove, Amanda Place 10": {
"Annangrove, Amanda Place 10 (2)": {
"suburb": "ANn ANgROvE",
"street": "amanda PlaC e",
"houseNo": " 10 ",
@@ -41,13 +41,13 @@ class Source:
suburbs[entry["Suburb"].strip().upper().replace(" ", "")] = entry["SuburbKey"]
# check if suburb exists
suburb_searh = self._suburb.strip().upper().replace(" ", "")
if suburb_searh not in suburbs:
suburb_search = self._suburb.strip().upper().replace(" ", "")
if suburb_search not in suburbs:
raise Exception(f"suburb not found: {self._suburb}")
suburbKey = suburbs[suburb_searh]
suburb_key = suburbs[suburb_search]
# get list of streets for selected suburb
r = requests.get(f"{self._url}/streets/{suburbKey}")
r = requests.get(f"{self._url}/streets/{suburb_key}")
data = json.loads(r.text)
streets = {}
@@ -58,30 +58,30 @@ class Source:
street_search = self._street.strip().upper().replace(" ", "")
if street_search not in streets:
raise Exception(f"street not found: {self._street}")
streetKey = streets[street_search]
street_key = streets[street_search]
# get list of house numbers for selected street
params = {"streetkey": streetKey, "suburbKey": suburbKey}
params = {"streetkey": street_key, "suburbKey": suburb_key}
r = requests.get(
f"{self._url}/properties/GetPropertiesByStreetAndSuburbKey",
params=params,
)
data = json.loads(r.text)
houseNos = {}
house_numbers = {}
for entry in data:
houseNos[
house_numbers[
(str(int(entry["HouseNo"])) + entry.get("HouseSuffix", "").strip()).strip().upper().replace(" ", "")
] = entry["PropertyKey"]
# check if house number exists
houseNo_search = self._houseNo.strip().upper().replace(" ", "")
if houseNo_search not in houseNos:
if houseNo_search not in house_numbers:
raise Exception(f"house number not found: {self._houseNo}")
propertyKey = houseNos[houseNo_search]
property_key = house_numbers[houseNo_search]
# get collection schedule
r = requests.get(f"{self._url}/services/{propertyKey}")
r = requests.get(f"{self._url}/services/{property_key}")
data = json.loads(r.text)
entries = []

View File

@@ -4,7 +4,7 @@ import requests
from waste_collection_schedule import Collection # type: ignore[attr-defined]
from waste_collection_schedule.service.ICS import ICS
TITLE = "Wermelskirchen"
TITLE = "Wermelskirchen (Service Down)"
DESCRIPTION = "Source for Abfallabholung Wermelskirchen, Germany"
URL = "https://www.wermelskirchen.de"
TEST_CASES = {

View File

@@ -3,7 +3,7 @@ import requests
from bs4 import BeautifulSoup
from datetime import datetime
from waste_collection_schedule import Collection
from waste_collection_schedule import Collection # type: ignore[attr-defined]
TITLE = "Borough Council of King's Lynn & West Norfolk"
@@ -24,6 +24,7 @@ ICON_MAP = {
"GARDEN": "mdi:leaf"
}
class Source:
def __init__(self, uprn):
self._uprn = str(uprn).zfill(12)
@@ -32,7 +33,7 @@ class Source:
# Get session and amend cookies
s = requests.Session()
r0 = s.get(
s.get(
"https://www.west-norfolk.gov.uk/info/20174/bins_and_recycling_collection_dates",
headers=HEADERS
)
@@ -44,7 +45,7 @@ class Source:
)
# Get initial collection dates using updated cookies
r1= s.get(
s.get(
"https://www.west-norfolk.gov.uk/info/20174/bins_and_recycling_collection_dates",
headers=HEADERS,
cookies=s.cookies
@@ -70,10 +71,10 @@ class Source:
dt = d.text + " " + month.text
entries.append(
Collection(
date = datetime.strptime(dt, "%d %B %Y").date(),
t = a,
icon = ICON_MAP.get(a.upper())
date=datetime.strptime(dt, "%d %B %Y").date(),
t=a,
icon=ICON_MAP.get(a.upper())
)
)
return entries
return entries

View File

@@ -74,7 +74,7 @@ class Source:
break
if not bezirk_id:
raise Exception(f"bezirk not found")
raise Exception("bezirk not found")
# get ortsteil id
r = session.get(API_URL.format(
@@ -91,7 +91,7 @@ class Source:
last_orts_id = part.split(" = ")[1][1:-1]
if not ortsteil_id:
raise Exception(f"ortsteil not found")
raise Exception("ortsteil not found")
street_id = None
@@ -111,13 +111,13 @@ class Source:
last_street_id = part.split(" = ")[1][1:-1]
if not street_id:
raise Exception(f"street not found")
raise Exception("street not found")
entries = self.get_calendar_data(year, bezirk_id, ortsteil_id, street_id, session)
if datetime.now().month >= 11:
try:
entries += self.get_calendar_data(year+1, bezirk_id, ortsteil_id, street_id, session)
except Exception as e:
except Exception:
pass
return entries

View File

@@ -82,6 +82,11 @@ def customize_function(entry: Collection, customize: Dict[str, Customize]):
return entry
def apply_day_offset(entry: Collection, day_offset: int):
entry.set_date(entry.date + datetime.timedelta(days=day_offset))
return entry
class SourceShell:
def __init__(
self,
@@ -92,6 +97,7 @@ class SourceShell:
url: Optional[str],
calendar_title: Optional[str],
unique_id: str,
day_offset: int,
):
self._source = source
self._customize = customize
@@ -102,6 +108,7 @@ class SourceShell:
self._unique_id = unique_id
self._refreshtime = None
self._entries: List[Collection] = []
self._day_offset = day_offset
@property
def refreshtime(self):
@@ -127,6 +134,10 @@ class SourceShell:
def unique_id(self):
return self._unique_id
@property
def day_offset(self):
return self._day_offset
def fetch(self):
"""Fetch data from source."""
try:
@@ -149,6 +160,10 @@ class SourceShell:
# customize fetched entries
entries = map(lambda x: customize_function(x, self._customize), entries)
# apply day offset
if self._day_offset != 0:
entries = map(lambda x: apply_day_offset(x, self._day_offset), entries)
self._entries = list(entries)
def get_dedicated_calendar_types(self):
@@ -182,6 +197,7 @@ class SourceShell:
customize: Dict[str, Customize],
source_args,
calendar_title: Optional[str] = None,
day_offset: int = 0,
):
# load source module
try:
@@ -204,6 +220,7 @@ class SourceShell:
url=source_module.URL, # type: ignore[attr-defined]
calendar_title=calendar_title,
unique_id=calc_unique_source_id(source_name, source_args),
day_offset=day_offset,
)
return g

68
doc/ics/gedling_gov_uk.md Normal file
View File

@@ -0,0 +1,68 @@
# Gedling Borough Council (unofficial)
Gedling Borough Council (unofficial) is supported by the generic [ICS](/doc/source/ics.md) source. For all available configuration options, please refer to the source description.
## How to get the configuration arguments
- Gedling Borough Council does not provide bin collections in the iCal calendar format directly.
- The iCal calendar files have been generated from the official printed calendars and hosted on GitHub for use.
- Go to the Gedling Borough Council [Refuse Collection Days](https://apps.gedling.gov.uk/refuse/search.aspx) site and enter your street name to find your bin day/garden waste collection schedule. e.g. "Wednesday G2".
- Find the [required collection schedule](https://jamesmacwhite.github.io/gedling-borough-council-bin-calendars/) and use the "Copy to clipboard" button for the URL of the .ics file.
## Examples
### Monday G1 (General bin collection)
```yaml
waste_collection_schedule:
sources:
- name: ics
args:
url: https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_monday_g1_bin_schedule.ics
```
### Wednesday G2 (General bin collection)
```yaml
waste_collection_schedule:
sources:
- name: ics
args:
url: https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_wednesday_g2_bin_schedule.ics
```
### Friday G3 (General bin collection)
```yaml
waste_collection_schedule:
sources:
- name: ics
args:
url: https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_friday_g3_bin_schedule.ics
```
### Monday A (Garden waste collection)
```yaml
waste_collection_schedule:
sources:
- name: ics
args:
url: https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_monday_a_garden_bin_schedule.ics
```
### Wednesday C (Garden waste collection)
```yaml
waste_collection_schedule:
sources:
- name: ics
args:
url: https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_wednesday_c_garden_bin_schedule.ics
```
### Friday E (Garden waste collection)
```yaml
waste_collection_schedule:
sources:
- name: ics
args:
url: https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_friday_e_garden_bin_schedule.ics
```

22
doc/ics/herten_de.md Normal file
View File

@@ -0,0 +1,22 @@
# Herten (durth-roos.de)
Herten (durth-roos.de) is supported by the generic [ICS](/doc/source/ics.md) source. For all available configuration options, please refer to the source description.
## How to get the configuration arguments
- Go to <https://abfallkalender.durth-roos.de/herten/> and select your location.
- Right-click the `iCalendar` button and copy the link address to get a webcal link. (You can ignore the note on that page, as this source automatically refetches the ICS file.)
- Replace the `url` in the example configuration with this link.
## Examples
### Ackerstraße 1
```yaml
waste_collection_schedule:
sources:
- name: ics
args:
url: https://abfallkalender.durth-roos.de/herten/icalendar/Ackerstrasse_1.ics
```

View File

@@ -23,6 +23,7 @@ known to work with:
|City of Georgetown, TX|USA|[texasdisposal.com](https://www.texasdisposal.com/waste-wizard/)|
|City of Vancouver|Canada|[vancouver.ca](https://vancouver.ca/home-property-development/garbage-and-recycling-collection-schedules.aspx)|
|City of Nanaimo|Canada|[nanaimo.ca](https://www.nanaimo.ca/city-services/garbage-recycling/collectionschedule)|
|City of Austin|USA|[austintexas.gov](https://www.austintexas.gov/myschedule)|
and probably a lot more.
@@ -96,3 +97,13 @@ waste_collection_schedule:
args:
url: webcal://recollect.a.ssl.fastly.net/api/places/3734BF46-A9A1-11E2-8B00-43B94144C028/services/193/events.en.ics?client_id=8844492C-9457-11EE-90E3-08A383E66757
```
### Cathedral of Junk, Austin, TX
```yaml
waste_collection_schedule:
sources:
- name: ics
args:
split_at: '\, (?:and )?|(?: and )'
url: https://recollect.a.ssl.fastly.net/api/places/2587D9F6-DF59-11E8-96F5-0E2C682931C6/services/323/events.en-US.ics
```
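The `split_at` pattern in the example above splits combined recollect event summaries into separate collection types. A quick illustration of how that regex behaves (the sample strings below are made up for demonstration, not actual feed data):

```python
import re

# Same pattern as the split_at argument above:
# splits on ", ", ", and " or " and ".
SPLIT_AT = r"\, (?:and )?|(?: and )"

print(re.split(SPLIT_AT, "Trash, Recycling and Compost"))
# ['Trash', 'Recycling', 'Compost']
print(re.split(SPLIT_AT, "Garbage and Yard Waste"))
# ['Garbage', 'Yard Waste']
```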

View File

@@ -0,0 +1,20 @@
title: Gedling Borough Council (unofficial)
url: https://github.com/jamesmacwhite/gedling-borough-council-bin-calendars
howto: |
- Gedling Borough Council does not provide bin collections in the iCal calendar format directly.
- The iCal calendar files have been generated from the official printed calendars and hosted on GitHub for use.
- Go to the Gedling Borough Council [Refuse Collection Days](https://apps.gedling.gov.uk/refuse/search.aspx) site and enter your street name to find your bin day/garden waste collection schedule. e.g. "Wednesday G2".
- Find the [required collection schedule](https://jamesmacwhite.github.io/gedling-borough-council-bin-calendars/) and use the "Copy to clipboard" button for the URL of the .ics file.
test_cases:
Monday G1 (General bin collection):
url: "https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_monday_g1_bin_schedule.ics"
Wednesday G2 (General bin collection):
url: "https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_wednesday_g2_bin_schedule.ics"
Friday G3 (General bin collection):
url: "https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_friday_g3_bin_schedule.ics"
Monday A (Garden waste collection):
url: "https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_monday_a_garden_bin_schedule.ics"
Wednesday C (Garden waste collection):
url: "https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_wednesday_c_garden_bin_schedule.ics"
Friday E (Garden waste collection):
url: "https://raw.githubusercontent.com/jamesmacwhite/gedling-borough-council-bin-calendars/main/ical/gedling_borough_council_friday_e_garden_bin_schedule.ics"

View File

@@ -0,0 +1,9 @@
title: Herten (durth-roos.de)
url: https://herten.de
howto: |
- Go to <https://abfallkalender.durth-roos.de/herten/> and select your location.
- Right-click the `iCalendar` button and copy the link address to get a webcal link. (You can ignore the note on that page, as this source automatically refetches the ICS file.)
- Replace the `url` in the example configuration with this link.
test_cases:
Ackerstraße 1:
url: "https://abfallkalender.durth-roos.de/herten/icalendar/Ackerstrasse_1.ics"

View File

@@ -44,6 +44,9 @@ extra_info:
- title: City of Nanaimo
url: https://www.nanaimo.ca
country: ca
- title: City of Austin, TX
url: https://austintexas.gov
country: us
howto: |
- To get the URL, search your address in the recollect form of your home town.
- Click "Get a calendar", then "Add to Google Calendar".
@@ -63,6 +66,7 @@ howto: |
|City of Georgetown, TX|USA|[texasdisposal.com](https://www.texasdisposal.com/waste-wizard/)|
|City of Vancouver|Canada|[vancouver.ca](https://vancouver.ca/home-property-development/garbage-and-recycling-collection-schedules.aspx)|
|City of Nanaimo|Canada|[nanaimo.ca](https://www.nanaimo.ca/city-services/garbage-recycling/collectionschedule)|
|City of Austin|USA|[austintexas.gov](https://www.austintexas.gov/myschedule)|
and probably a lot more.
test_cases:
@@ -85,3 +89,6 @@ test_cases:
split_at: "\\, (?:and )?|(?: and )"
166 W 47th Ave, Vancouver:
url: "webcal://recollect.a.ssl.fastly.net/api/places/3734BF46-A9A1-11E2-8B00-43B94144C028/services/193/events.en.ics?client_id=8844492C-9457-11EE-90E3-08A383E66757"
Cathedral of Junk, Austin, TX:
url: https://recollect.a.ssl.fastly.net/api/places/2587D9F6-DF59-11E8-96F5-0E2C682931C6/services/323/events.en-US.ics
split_at: "\\, (?:and )?|(?: and )"

View File

@@ -64,6 +64,7 @@ waste_collection_schedule:
picture: PICTURE
use_dedicated_calendar: USE_DEDICATED_CALENDAR
dedicated_calendar_title: DEDICATED_CALENDAR_TITLE
day_offset: DAY_OFFSET
calendar_title: CALENDAR_TITLE
fetch_time: FETCH_TIME
random_fetch_time_offset: RANDOM_FETCH_TIME_OFFSET
@@ -78,6 +79,7 @@ waste_collection_schedule:
| random_fetch_time_offset | int | optional | randomly offsets the `fetch_time` by up to _int_ minutes. Can be used to distribute Home Assistant fetch commands over a longer time frame to avoid peak loads at service providers |
| day_switch_time | time | optional | time of the day in "HH:MM" that Home Assistant dismisses the current entry and moves to the next entry. If no time if provided, the default of "10:00" is used. |
| separator | string | optional | Used to join entries if the multiple values for a single day are returned by the source. If no value is entered, the default of ", " is used |
| day_offset | int | optional | Offset in days to add to the collection date (can be negative). If no value is entered, the default of 0 is used |
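For example, the new `day_offset` option can be combined with any source. A minimal sketch, assuming `day_offset` sits at the source level alongside `calendar_title` (the `ics` source and URL are placeholders):

```yaml
waste_collection_schedule:
  sources:
    - name: ics
      args:
        url: EXAMPLE_ICS_URL  # placeholder, use your own calendar URL
      day_offset: 1  # report each collection one day later than the source
```

A negative value shifts all dates earlier instead.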
## Attributes for _sources_

View File

@@ -6,7 +6,7 @@ Support for schedules provided by [Apps by Abfall+](https://www.abfallplus.de/),
```yaml
waste_collection_schedule:
sources:
sources:
- name: app_abfallplus_de
args:
app_id: APP ID
@@ -49,7 +49,7 @@ If you need to select a first letter of you street name, you can use the city ar
```yaml
waste_collection_schedule:
sources:
sources:
- name: app_abfallplus_de
args:
app_id: de.albagroup.app
@@ -60,7 +60,7 @@ waste_collection_schedule:
```yaml
waste_collection_schedule:
sources:
sources:
- name: app_abfallplus_de
args:
app_id: de.k4systems.bonnorange
@@ -71,7 +71,7 @@ waste_collection_schedule:
```yaml
waste_collection_schedule:
sources:
sources:
- name: app_abfallplus_de
args:
app_id: de.k4systems.abfallappwug
@@ -81,7 +81,7 @@ waste_collection_schedule:
```yaml
waste_collection_schedule:
sources:
sources:
- name: app_abfallplus_de
args:
app_id: de.k4systems.awbgp
@@ -92,7 +92,7 @@ waste_collection_schedule:
```yaml
waste_collection_schedule:
sources:
sources:
- name: app_abfallplus_de
args:
app_id: de.k4systems.leipziglk
@@ -100,6 +100,19 @@ waste_collection_schedule:
bezirk: Brandis
```
```yaml
waste_collection_schedule:
sources:
- name: app_abfallplus_de
args:
app_id: de.abfallwecker
city: Lauchringen
strasse: Bundesstr.
hnr: 20
bundesland: Baden-Württemberg
landkreis: Kreis Waldshut
```
## How to get the source argument
Use the app of your local provider and select your address. Provide all arguments that are requested by the app.
@@ -144,7 +157,7 @@ The app_id can be found from the url of the play store entry: https://play.googl
| de.k4systems.leipziglk | Landkreis Leipzig |
| de.k4systems.abfallappbk | Bad Kissingen |
| de.cmcitymedia.hokwaste | Hohenlohekreis |
| de.abfallwecker | Rottweil, Tuttlingen, Waldshut, Prignitz, Nordsachsen |
| de.abfallwecker | Rottweil, Tuttlingen, Kreis Waldshut, Prignitz, Nordsachsen |
| de.k4systems.abfallappka | Kreis Karlsruhe |
| de.k4systems.lkgoettingen | Abfallwirtschaft Altkreis Göttingen, Abfallwirtschaft Altkreis Osterode am Harz |
| de.k4systems.abfallappcux | Kreis Cuxhaven |

View File

@@ -75,6 +75,7 @@ List of customers (2021-07-09):
- `lra-ab`: Landkreis Aschaffenburg
- `lra-dah`: Landratsamt Dachau
- `lra-mue`: Landkreis Mühldorf a. Inn
- `lra-regensburg`: Landratsamt Regensburg
- `lra-schweinfurt`: Landkreis Schweinfurt
- `memmingen`: Stadt Memmingen
- `neustadt`: Neustadt a.d. Waldnaab

View File

@@ -0,0 +1,51 @@
# Birmingham City Council
Support for schedules provided by [Birmingham City Council](https://www.birmingham.gov.uk/), in the UK.
## Configuration via configuration.yaml
```yaml
waste_collection_schedule:
sources:
- name: birmingham_gov_uk
args:
uprn: UNIQUE_PROPERTY_REFERENCE_NUMBER
postcode: POSTCODE
```
### Configuration Variables
**uprn**<br>
*(string)*
The "Unique Property Reference Number" for your address. You can find it by searching for your address at https://www.findmyaddress.co.uk/.
**postcode**<br>
*(string)*
The Post Code for your address. This needs to match the postcode corresponding to your UPRN.
## Example
```yaml
waste_collection_schedule:
sources:
- name: birmingham_gov_uk
args:
uprn: 100070321799
postcode: B27 6TF
```
## Returned Collections
This source will return the next collection date for each container type.
## Returned collection types
### Household Collection
Grey lid rubbish bin is for general waste.
### Recycling Collection
Green lid recycling bin is for dry recycling (metals, glass and plastics).
Blue lid recycling bin is for paper and card.
### Green Recycling Chargeable Collections
Green Recycling (Chargeable Collections).

View File

@@ -0,0 +1,55 @@
# Bromsgrove District Council
Support for schedules provided by [Bromsgrove District Council](https://www.bromsgrove.gov.uk/), in the UK.
## Configuration via configuration.yaml
```yaml
waste_collection_schedule:
sources:
- name: bromsgrove_gov_uk
args:
uprn: UNIQUE_PROPERTY_REFERENCE_NUMBER
postcode: POSTCODE
```
### Configuration Variables
**uprn**
*(string)*
The "Unique Property Reference Number" for your address. You can find it by searching for your address at <https://www.findmyaddress.co.uk/>.
**postcode**
*(string)*
The Post Code for your address. This needs to match the postcode corresponding to your UPRN.
## Example
```yaml
waste_collection_schedule:
sources:
- name: bromsgrove_gov_uk
args:
uprn: 10094552413
postcode: B61 8DA
```
## Returned Collections
This source will return the next collection date for each container type.
## Returned collection types
### Household Collection
Grey bin is for general waste.
### Recycling Collection
Green bin is for dry recycling (metals, glass, plastics, paper and card).
### Garden waste Chargeable Collections
Brown bin is for garden waste. This is an annual chargeable service.

View File

@@ -0,0 +1,61 @@
# Borough of Broxbourne Council
Support for schedules provided by [Borough of Broxbourne Council](https://www.broxbourne.gov.uk/), in the UK.
## Configuration via configuration.yaml
```yaml
waste_collection_schedule:
sources:
- name: broxbourne_gov_uk
args:
uprn: UNIQUE_PROPERTY_REFERENCE_NUMBER
postcode: POSTCODE
```
### Configuration Variables
**uprn**
*(string)*
The "Unique Property Reference Number" for your address. You can find it by searching for your address at <https://www.findmyaddress.co.uk/>.
**postcode**
*(string)*
The Post Code for your address. This needs to match the postcode corresponding to your UPRN.
## Example
```yaml
waste_collection_schedule:
sources:
- name: broxbourne_gov_uk
args:
uprn: 148028240
postcode: EN11 8PU
```
## Returned Collections
This source will return the next collection date for each container type serviced at your address.
If you don't subscribe to a garden waste bin, no data is returned for it.
## Returned collection types
### Domestic
Black bin for general waste
### Recycling
Black recycling box for mixed recycling
### Green Waste
Green Bin for garden waste.
If you don't pay for a garden waste bin, it won't be included.
### Food
Green or Brown Caddy for food waste.

56
doc/source/bury_gov_uk.md Normal file
View File

@@ -0,0 +1,56 @@
# Bury Council
Support for schedules provided by [Bury Council](https://www.bury.gov.uk/), serving Bury, UK.
## Configuration via configuration.yaml
```yaml
waste_collection_schedule:
sources:
- name: bury_gov_uk
args:
id: PROPERTY_ID
postcode: POSTCODE
address: ADDRESS
```
### Configuration Variables
**id**<br>
*(string) (optional)*
**postcode**<br>
*(string) (optional)*
**address**<br>
*(string) (optional)*
## Example using UPRN
```yaml
waste_collection_schedule:
sources:
- name: bury_gov_uk
args:
id: "647186"
```
## Example using Address and Postcode
```yaml
waste_collection_schedule:
sources:
- name: bury_gov_uk
args:
address: "1 Oakwood Close"
postcode: "BL8 1DD"
```
## How to find your `PROPERTY_ID`
Your PROPERTY_ID is the number at the end of the URL used to load your collection schedule from the Bury Council website.
For example: https://www.bury.gov.uk/app-services/getPropertyById?id=647186
To find it, navigate to https://www.bury.gov.uk/waste-and-recycling/bin-collection-days-and-alerts, open your browser's Developer Tools, select the Network tab, then enter your postcode and select your address. The getPropertyById request should appear in the network traffic.

View File

@@ -106,6 +106,7 @@ Support for schedules provided by [App CITIES](https://citiesapps.com), serving
| Lackendorf | [lackendorf.at](https://www.lackendorf.at) |
| Langau | [langau.at](http://www.langau.at) |
| Langenrohr | [langenrohr.gv.at](https://www.langenrohr.gv.at) |
| Leibnitz | [leibnitz.at](https://www.leibnitz.at) |
| Leithaprodersdorf | [leithaprodersdorf.at](http://www.leithaprodersdorf.at) |
| Leutschach an der Weinstraße | [leutschach-weinstrasse.gv.at](https://www.leutschach-weinstrasse.gv.at) |
| Lieboch | [lieboch.gv.at](https://www.lieboch.gv.at) |

View File

@@ -0,0 +1,32 @@
# East Ayrshire Council
Support for schedules provided by [East Ayrshire Council](https://www.east-ayrshire.gov.uk/Housing/RubbishAndRecycling/Collection-days/ViewYourRecyclingCalendar.aspx).
## Configuration via configuration.yaml
```yaml
waste_collection_schedule:
sources:
- name: east_ayrshire_gov_uk
args:
uprn: UNIQUE_PROPERTY_REFERENCE_NUMBER
```
### Configuration Variables
**uprn**
*(string) (required)*
## Example
```yaml
waste_collection_schedule:
sources:
- name: east_ayrshire_gov_uk
args:
uprn: "127072649"
```
## How to find your `UPRN`
An easy way to find your Unique Property Reference Number (UPRN) is by going to <https://www.findmyaddress.co.uk/> and entering your address details.

View File

@@ -1,6 +1,6 @@
# Glasgow City Council
Support for schedules provided by [Glasgow City Council](https://www.glasgow.gov.uk/forms/refuseandrecyclingcalendar/AddressSearch.aspx), serving the
Support for schedules provided by [Glasgow City Council](https://onlineservices.glasgow.gov.uk/forms/RefuseAndRecyclingWebApplication/AddressSearch.aspx), serving the
city of Glasgow, UK.
## Configuration via configuration.yaml
@@ -32,4 +32,4 @@ waste_collection_schedule:
The UPRN code can be found by entering your postcode or address on the
[Glasgow City Council Bin Collections page
](https://www.glasgow.gov.uk/forms/refuseandrecyclingcalendar/AddressSearch.aspx). When on the address list click the 'select' link for your address then on the calendar page look in the browser address bar for your UPRN code e.g. https://www.glasgow.gov.uk/forms/refuseandrecyclingcalendar/CollectionsCalendar.aspx?UPRN=YOURUPRNSHOWNHERE.
](https://onlineservices.glasgow.gov.uk/forms/RefuseAndRecyclingWebApplication/AddressSearch.aspx). When on the address list click the 'select' link for your address then on the calendar page look in the browser address bar for your UPRN code e.g. https://onlineservices.glasgow.gov.uk/forms/RefuseAndRecyclingWebApplication/CollectionsCalendar.aspx?UPRN=YOURUPRNSHOWNHERE.

View File

@@ -192,6 +192,7 @@ This source has been successfully tested with the following service providers:
- [Hallesche Wasser und Stadtwirtschaft GmbH](/doc/ics/hws_halle_de.md) / hws-halle.de
- [Heidelberg](/doc/ics/gipsprojekt_de.md) / heidelberg.de
- [Heinz-Entsorgung (Landkreis Freising)](/doc/ics/heinz_entsorgung_de.md) / heinz-entsorgung.de
- [Herten (durth-roos.de)](/doc/ics/herten_de.md) / herten.de
- [Kreisstadt Groß-Gerau](/doc/ics/gross_gerau_de.md) / gross-gerau.de
- [Landkreis Anhalt-Bitterfeld](/doc/ics/abikw_de.md) / abikw.de
- [Landkreis Böblingen](/doc/ics/abfall_app_net.md) / lrabb.de
@@ -246,11 +247,13 @@ This source has been successfully tested with the following service providers:
- [Anglesey](/doc/ics/anglesey_gov_wales.md) / anglesey.gov.wales
- [Falkirk](/doc/ics/falkirk_gov_uk.md) / falkirk.gov.uk
- [Gedling Borough Council (unofficial)](/doc/ics/gedling_gov_uk.md) / github.com/jamesmacwhite/gedling-borough-council-bin-calendars
- [Westmorland & Furness Council, Barrow area](/doc/ics/barrowbc_gov_uk.md) / barrowbc.gov.uk
- [Westmorland & Furness Council, South Lakeland area](/doc/ics/southlakeland_gov_uk.md) / southlakeland.gov.uk
### United States of America
- [City of Austin, TX](/doc/ics/recollect.md) / austintexas.gov
- [City of Bloomington](/doc/ics/recollect.md) / bloomington.in.gov
- [City of Cambridge](/doc/ics/recollect.md) / cambridgema.gov
- [City of Gastonia, NC](/doc/ics/recollect.md) / gastonianc.gov

View File

@@ -0,0 +1,28 @@
# Kiertokapula Finland
Support for upcoming pick ups provided by [Kiertokapula self-service portal](https://asiakasnetti.kiertokapula.fi/).
## Configuration via configuration.yaml
```yaml
waste_collection_schedule:
sources:
- name: kiertokapula_fi
args:
bill_number: "YOUR_BILL_NUMBER"
password: "YOUR_PASSWORD"
```
### Configuration Variables
**bill_number**
*(string) (required)*
**password**
*(string) (required)*
## How to get the source argument
**You need to have a registered account in Kiertokapula's self-service portal!**
To register, use the customer number shown on your bills; the default password is your home address.
The system will prompt you to change the password. Once you have done so, your account is registered and ready to be configured here.

View File

@@ -0,0 +1,32 @@
# North Ayrshire Council
Support for schedules provided by [North Ayrshire Council](https://www.north-ayrshire.gov.uk/).
## Configuration via configuration.yaml
```yaml
waste_collection_schedule:
sources:
- name: north_ayrshire_gov_uk
args:
uprn: UNIQUE_PROPERTY_REFERENCE_NUMBER
```
### Configuration Variables
**uprn**
*(string) (required)*
## Example
```yaml
waste_collection_schedule:
sources:
- name: north_ayrshire_gov_uk
args:
uprn: "126043248"
```
## How to find your `UPRN`
An easy way to find your Unique Property Reference Number (UPRN) is by going to <https://www.findmyaddress.co.uk/> and entering your address details.

View File

@@ -0,0 +1,37 @@
# Renfrewshire Council
Support for schedules provided by [Renfrewshire Council](https://www.renfrewshire.gov.uk/article/2320/Check-your-bin-collection-day).
## Configuration via configuration.yaml
```yaml
waste_collection_schedule:
sources:
- name: renfrewshire_gov_uk
args:
postcode: POSTCODE
uprn: UNIQUE_PROPERTY_REFERENCE_NUMBER
```
### Configuration Variables
**postcode**
*(string) (required)*
**uprn**
*(string) (required)*
## Example
```yaml
waste_collection_schedule:
sources:
- name: renfrewshire_gov_uk
args:
postcode: "PA12 4AJ"
uprn: "123034174"
```
## How to find your `UPRN`
An easy way to find your Unique Property Reference Number (UPRN) is by going to <https://www.findmyaddress.co.uk/> and entering your address details.

View File

@@ -1,5 +1,7 @@
# Wermelskirchen Abfallkalender
!!!! The IT service provider was hit by a disastrous cyber attack in October of 2023. Since then this API does not work anymore and might never in the future (at least in the same form). !!!!
Support for schedules provided by [Abfallkalender Wermelskirchen](https://www.wermelskirchen.de/rathaus/buergerservice/formulare-a-z/abfallkalender-online/) located in NRW, Germany.
## Limitations

File diff suppressed because one or more lines are too long

View File

@@ -46,9 +46,6 @@ def split_camel_and_snake_case(s):
def main():
parser = argparse.ArgumentParser(description="Update docu links.")
# args = parser.parse_args()
sources = []
sources += browse_sources()
@@ -600,6 +597,10 @@ COUNTRYCODES = [
"code": "fr",
"name": "France",
},
{
"code": "fi",
"name": "Finland",
},
]
if __name__ == "__main__":