mirror of
https://github.com/sascha-hemi/hacs_waste_collection_schedule.git
synced 2026-03-21 04:06:03 +01:00
Merge branch 'mampfes:master' into master
This commit is contained in:
@@ -21,7 +21,7 @@ repos:
- --skip="./.*,*.csv,*.json"
- --quiet-level=2
exclude_types: [csv, json]
- repo: https://gitlab.com/pycqa/flake8
- repo: https://github.com/pycqa/flake8
rev: 3.9.2
hooks:
- id: flake8
73 README.md
@@ -130,9 +130,11 @@ Currently the following service providers are supported:
- [Stadtreinigung.Hamburg](./doc/source/stadtreinigung_hamburg.md)
- [Stadtreinigung-Leipzig.de](./doc/source/stadtreinigung_leipzig_de.md)
- [Stadt-Willich.de](./doc/source/stadt_willich_de.md)
- [Städteservice Raunheim Rüsselsheim](./doc/source/staedteservice_de.md)
- [Südbrandenburgischer Abfallzweckverband](./doc/source/sbazv_de.md)
- [Umweltbetrieb Stadt Bielefeld](./doc/source/bielefeld_de.md)
- [WAS Wolfsburg](./doc/source/was_wolfsburg_de.md)
- [Wermelskirchen](./doc/source/wermelskirchen_de.md)

### Lithuania
@@ -148,10 +150,11 @@ Currently the following service providers are supported:
- [Auckland](./doc/source/aucklandcouncil_govt_nz.md)
- [Christchurch](./doc/source/ccc_govt_nz.md)
- [Gore, Invercargill & Southland](./doc/source/wastenet_org_nz.md)
- [Horowhenua District](./doc/source/horowhenua_govt_nz.md)
- [Waipa District](./doc/source/waipa_nz.md)
- [Wellington](./doc/source/wellington_govt_nz.md)

## Norway
### Norway

- [Min Renovasjon](./doc/source/minrenovasjon_no.md)
- [Oslo Kommune](./doc/source/oslokommune_no.md)
@@ -196,9 +199,11 @@ Currently the following service providers are supported:
- [Guildford Borough Council - guildford.gov.uk](./doc/source/guildford_gov_uk.md)
- [Harborough District Council - www.harborough.gov.uk](./doc/source/fccenvironment_co_uk.md)
- [Huntingdonshire District Council - huntingdonshire.gov.uk](./doc/source/huntingdonshire_gov_uk.md)
- [The Royal Borough of Kingston - kingston.gov.uk](./doc/source/kingston_gov_uk.md)
- [Lewes District Council - lewes-eastbourne.gov.uk](./doc/source/environmentfirst_co_uk.md)
- [London Borough of Lewisham - lewisham.gov.uk](./doc/source/lewisham_gov_uk.md)
- [Manchester City Council - manchester.gov.uk](./doc/source/manchester_uk.md)
- [Middlesbrough Council - middlesbrough.gov.uk](./doc/source/middlesbrough_gov_uk.md)
- [Newcastle City Council - newcastle.gov.uk](./doc/source/newcastle_gov_uk.md)
- [North Somerset Council - n-somerset.gov.uk](./doc/source/nsomerset_gov_uk.md)
- [Nottingham City Council - nottinghamcity.gov.uk](./doc/source/nottingham_city_gov_uk.md)
@@ -209,6 +214,7 @@ Currently the following service providers are supported:
- [South Cambridgeshire District Council - scambs.gov.uk](./doc/source/scambs_gov_uk.md)
- [South Norfolk and Broadland Council - southnorfolkandbroadland.gov.uk](./doc/source/south_norfolk_and_broadland_gov_uk.md)
- [Stevenage Borough Council - stevenage.gov.uk](./doc/source/stevenage_gov_uk.md)
- [Tewkesbury Borough Council](./doc/source/tewkesbury_gov_uk.md)
- [City of York Council - york.gov.uk](./doc/source/york_gov_uk.md)
- [Walsall Council - walsall.gov.uk](./doc/source/walsall_gov_uk.md)
- [West Berkshire Council - westberks.gov.uk](./doc/source/westberks_gov_uk.md)
@@ -372,7 +378,7 @@ Create a dedicated calendar for this type.

*(string) (optional, default: ```None```)*

Optional title of the dedicated calendar. If not set, the default of the source will be used.
Optional title of the dedicated calendar. If not set, the waste type will be used.

## 2. Add sensor(s) to a source
@@ -398,9 +404,18 @@ sensor:

**source_index**

*(integer) (optional, default: ```0```)*
*(integer or list of integers) (optional, default: ```0```)*

Reference to source (service provider). Used to assign a sensor to a specific source. Only required if you defined more than one source. The first defined source has the source_index 0, the second source 1, ...
Reference to source (service provider). Used to assign a sensor to a specific source. Only required if you defined more than one source. The first source defined in `configuration.yaml` has source_index 0, the second has source_index 1, and so on.
If you want a sensor which combines the data from multiple sources, just add a list of source indexes here.
Example:
```yaml
source_index: [0, 1]
# or
source_index:
  - 0
  - 1
```

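For illustration, a two-source setup with one combined sensor could look like the following sketch (the provider names and args are placeholders, not real sources):

```yaml
waste_collection_schedule:
  sources:
    - name: example_provider_a   # hypothetical source, gets source_index 0
      args:
        city: Springfield
    - name: example_provider_b   # hypothetical source, gets source_index 1
      args:
        city: Shelbyville

sensor:
  - platform: waste_collection_schedule
    name: combined_schedule
    source_index: [0, 1]  # combine the data from both sources
```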
**name**

@@ -424,7 +439,7 @@ Possible choices:

![details_format upcoming](./doc/images/details_format_upcoming.png)

- ```generic``` provides all attributes as generic Python data types. This can be used by a specialized Lovelace card (which doesn't exist so far).<br>
- ```generic``` provides all attributes as generic Python data types. This can be used by a specialized Lovelace card (which doesn't exist so far).

![details_format appointment_types](./doc/images/details_format_appointment_types.png)
@@ -480,7 +495,17 @@ The following variables can be used within `value_template` and `date_template`:

## FAQ

### 1. How do I format dates?
### 1. My Service Provider isn't supported. What can I do?

1. A lot of service providers provide ICS/iCal data as downloads or persistent links. These can be used together with the generic [ICS/iCal](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/ics.md) source.

2. In case your schedule follows a static schema, you can use the [static](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/static.md) source.

3. Implement a new [source](https://github.com/mampfes/hacs_waste_collection_schedule#how-to-add-new-sources) and create a PR.

4. Raise an [issue](https://github.com/mampfes/hacs_waste_collection_schedule/issues).
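For option 1, a minimal configuration sketch for the generic ICS source could look like this (the URL is a placeholder for your provider's ICS download link):

```yaml
waste_collection_schedule:
  sources:
    - name: ics
      args:
        url: "https://example.com/waste-schedule.ics"  # placeholder URL
```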

### 2. How do I format dates?

Use [strftime](https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) in `value_template` or `date_template`:

@@ -498,7 +523,7 @@ value_template: '{{value.date.strftime("%a, %m/%d/%Y")}}'
date_template: '{{value.date.strftime("%a, %m/%d/%Y")}}'
```

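The strftime pattern from the snippet above behaves the same in plain Python as inside the Jinja template:

```python
from datetime import date

d = date(2023, 3, 17)  # 2023-03-17 was a Friday
# the weekday abbreviation (%a) is locale-dependent; 'Fri' in an English locale
print(d.strftime("%a, %m/%d/%Y"))
```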
### 2. How do I show the number of days to the next collection?
### 3. How do I show the number of days to the next collection?

Set `value_template` within the sensor configuration:

@@ -506,7 +531,7 @@ Set `value_template` within the sensor configuration:
value_template: 'in {{value.daysTo}} days'
```
### 3. How do I show *Today* / *Tomorrow* instead of *in 0/1 days*?
### 4. How do I show *Today* / *Tomorrow* instead of *in 0/1 days*?

Set `value_template` within the sensor configuration:

@@ -517,7 +542,7 @@ Set `value_template` within the sensor configuration:
value_template: '{% if value.daysTo == 0 %}Today{% elif value.daysTo == 1 %}Tomorrow{% else %}in {{value.daysTo}} days{% endif %}'
```
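The template's branching logic, expressed in plain Python for clarity (the Jinja version above is what actually goes into the sensor configuration):

```python
def days_to_phrase(days_to: int) -> str:
    """Same branching as the value_template above."""
    if days_to == 0:
        return "Today"
    if days_to == 1:
        return "Tomorrow"
    return f"in {days_to} days"

print(days_to_phrase(0))  # Today
print(days_to_phrase(3))  # in 3 days
```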
### 4. How do I join waste types in a `value_template`?
### 5. How do I join waste types in a `value_template`?

Use the `join` filter:

@@ -531,7 +556,7 @@ value_template: '{{value.types|join("+")}}'

Note: If you don't specify a `value_template`, waste types will be joined using the `separator` configuration variable.
### 5. How do I setup a sensor which shows only the days to the next collection?
### 6. How do I setup a sensor which shows only the days to the next collection?

Set `value_template` within the sensor configuration:

@@ -539,7 +564,7 @@ Set `value_template` within the sensor configuration:
value_template: '{{value.daysTo}}'
```
### 6. How do I setup a sensor which shows only the date of the next collection?
### 7. How do I setup a sensor which shows only the date of the next collection?

Set `value_template` within the sensor configuration:

@@ -547,7 +572,7 @@ Set `value_template` within the sensor configuration:
value_template: '{{value.date.strftime("%m/%d/%Y")}}'
```
### 7. How do I configure a sensor which shows only the waste type of the next collection?
### 8. How do I configure a sensor which shows only the waste type of the next collection?

Set `value_template` within the sensor configuration:

@@ -555,7 +580,7 @@ Set `value_template` within the sensor configuration:
value_template: '{{value.types|join(", ")}}'
```
### 8. How do I configure a sensor to show only collections of a specific waste type?
### 9. How do I configure a sensor to show only collections of a specific waste type?

Set `types` within the sensor configuration:

@@ -574,7 +599,7 @@ sensor:

Note: If you have set an alias for a waste type, you must use the alias name.
### 9. How can I rename a waste type?
### 10. How can I rename a waste type?

Set `alias` in the customize section of a source:

@@ -589,7 +614,7 @@ waste_collection_schedule:
alias: Recycle
```
### 10. How can I hide inappropriate waste types?
### 11. How can I hide inappropriate waste types?

Set the `show` configuration variable to *false* in the customize section of a source:

@@ -602,7 +627,7 @@ waste_collection_schedule:
show: false
```
### 11. How do I show a colored Lovelace card depending on the due date?
### 12. How do I show a colored Lovelace card depending on the due date?

You can use [Button Card](https://github.com/custom-cards/button-card) to create colored Lovelace cards:

@@ -647,7 +672,7 @@ state:
- value: default
```
### 12. Can I also use the **Garbage Collection Card** instead?
### 13. Can I also use the **Garbage Collection Card** instead?

Yes, the [Garbage Collection Card](https://github.com/amaximus/garbage-collection-card) can also be used with *Waste Collection Schedule*:

@@ -681,12 +706,24 @@ entity: sensor.garbage
type: 'custom:garbage-collection-card'
```
### 13. How can I sort waste type specific entities?
### 14. How can I sort waste type specific entities?

Prerequisites: You already have dedicated sensors per waste type and want to show the sensor with the next collection in a Lovelace card.

Add `add_days_to: True` to the configuration of all sensors you want to sort. This adds the attribute `daysTo`, which can be used by e.g. [auto-entities](https://github.com/thomasloven/lovelace-auto-entities) to sort entities by the day of the next collection.

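One possible way to wire this up with auto-entities (a sketch; the sensor name pattern is a placeholder, and the sort options should be checked against the auto-entities documentation):

```yaml
type: custom:auto-entities
card:
  type: entities
  title: Next collections
filter:
  include:
    - entity_id: sensor.waste_*   # placeholder pattern for your per-type sensors
sort:
  method: attribute
  attribute: daysTo
  numeric: true
```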
### 15. How can I disable the calendar?

If you don't like the calendar provided by Waste Collection Schedule, or you have configured dedicated calendars per waste type and therefore no longer need the global calendar, you can disable it so that it doesn't show up in the Calendar Dashboard any more:

Go to `Settings` --> `Entities` and select the calendar entity provided by Waste Collection Schedule. Then disable it using the menu items.

[![Open your Home Assistant instance and show your entities.](https://my.home-assistant.io/badges/entities.svg)](https://my.home-assistant.io/redirect/entities/)
### 16. I have configured multiple sources, but the sensors show only *UNAVAILABLE*

You probably forgot to add `source_index` to the sensor configuration.

## How to add new sources

1. Create a new source in folder `custom_components/waste_collection_schedule/waste_collection_schedule/source` with the lower case url of your service provider (e.g. `abc_com.py` for `http://www.abc.com`).
@@ -18,13 +18,13 @@ from homeassistant.helpers.event import async_track_time_change # isort:skip
# add module directory to path
package_dir = Path(__file__).resolve().parents[0]
site.addsitedir(str(package_dir))
from waste_collection_schedule import Customize, Scraper  # type: ignore # isort:skip # noqa: E402
from waste_collection_schedule import Customize, SourceShell  # type: ignore # isort:skip # noqa: E402

_LOGGER = logging.getLogger(__name__)

CONF_SOURCES = "sources"
CONF_SOURCE_NAME = "name"
CONF_SOURCE_ARGS = "args"  # scraper-source arguments
CONF_SOURCE_ARGS = "args"  # source arguments
CONF_SOURCE_CALENDAR_TITLE = "calendar_title"
CONF_SEPARATOR = "separator"
CONF_FETCH_TIME = "fetch_time"
@@ -92,7 +92,7 @@ async def async_setup(hass: HomeAssistant, config: dict):
day_switch_time=config[DOMAIN][CONF_DAY_SWITCH_TIME],
)

# create scraper(s)
# create shells for source(s)
for source in config[DOMAIN][CONF_SOURCES]:
# create customize object
customize = {}
@@ -106,7 +106,7 @@ async def async_setup(hass: HomeAssistant, config: dict):
use_dedicated_calendar=c.get(CONF_USE_DEDICATED_CALENDAR, False),
dedicated_calendar_title=c.get(CONF_DEDICATED_CALENDAR_TITLE, False),
)
api.add_scraper(
api.add_source_shell(
source_name=source[CONF_SOURCE_NAME],
customize=customize,
calendar_title=source.get(CONF_SOURCE_CALENDAR_TITLE),
@@ -132,7 +132,7 @@ class WasteCollectionApi:
self, hass, separator, fetch_time, random_fetch_time_offset, day_switch_time
):
self._hass = hass
self._scrapers = []
self._source_shells = []
self._separator = separator
self._fetch_time = fetch_time
self._random_fetch_time_offset = random_fetch_time_offset
@@ -183,15 +183,15 @@ class WasteCollectionApi:
"""When to hide entries for today."""
return self._day_switch_time

def add_scraper(
def add_source_shell(
self,
source_name,
customize,
source_args,
calendar_title,
):
self._scrapers.append(
Scraper.create(
self._source_shells.append(
SourceShell.create(
source_name=source_name,
customize=customize,
source_args=source_args,
@@ -200,17 +200,17 @@ class WasteCollectionApi:
)

def _fetch(self, *_):
for scraper in self._scrapers:
scraper.fetch()
for shell in self._source_shells:
shell.fetch()

self._update_sensors_callback()

@property
def scrapers(self):
return self._scrapers
def shells(self):
return self._source_shells

def get_scraper(self, index):
return self._scrapers[index] if index < len(self._scrapers) else None
def get_shell(self, index):
return self._source_shells[index] if index < len(self._source_shells) else None

@callback
def _fetch_callback(self, *_):
@@ -6,7 +6,13 @@ from datetime import datetime, timedelta
from homeassistant.components.calendar import CalendarEntity, CalendarEvent
from homeassistant.core import HomeAssistant

from custom_components.waste_collection_schedule.waste_collection_schedule.scraper import Scraper
# fmt: off
from custom_components.waste_collection_schedule.waste_collection_schedule.collection_aggregator import \
    CollectionAggregator
from custom_components.waste_collection_schedule.waste_collection_schedule.source_shell import \
    SourceShell

# fmt: on

_LOGGER = logging.getLogger(__name__)
@@ -21,52 +27,52 @@ async def async_setup_platform(hass, config, async_add_entities, discovery_info=

api = discovery_info["api"]

for scraper in api.scrapers:
dedicated_calendar_types = scraper.get_dedicated_calendar_types()
global_calendar_types = scraper.get_global_calendar_types()

if dedicated_calendar_types is not None:
for type in dedicated_calendar_types:
unique_id = calc_unique_calendar_id(scraper, type)

entities.append(
WasteCollectionCalendar(
api,
scraper,
scraper.get_calendar_title_for_type(type),
[scraper.get_collection_type(type)],
unique_id,
)
)

if global_calendar_types is not None or dedicated_calendar_types is None:
unique_id = calc_unique_calendar_id(scraper)
for shell in api.shells:
dedicated_calendar_types = shell.get_dedicated_calendar_types()
for type in dedicated_calendar_types:
entities.append(
WasteCollectionCalendar(
api,
scraper,
scraper.calendar_title,
[
scraper.get_collection_type(type)
for type in global_calendar_types
]
if global_calendar_types is not None
else None,
unique_id,
api=api,
aggregator=CollectionAggregator([shell]),
name=shell.get_calendar_title_for_type(type),
include_types={shell.get_collection_type_name(type)},
unique_id=calc_unique_calendar_id(shell, type),
)
)

entities.append(
WasteCollectionCalendar(
api=api,
aggregator=CollectionAggregator([shell]),
name=shell.calendar_title,
exclude_types={
shell.get_collection_type_name(type)
for type in dedicated_calendar_types
},
unique_id=calc_unique_calendar_id(shell),
)
)

async_add_entities(entities)


class WasteCollectionCalendar(CalendarEntity):
"""Calendar entity class."""

def __init__(self, api, scraper, name, types, unique_id: str):
def __init__(
self,
api,
aggregator,
name,
unique_id: str,
include_types=None,
exclude_types=None,
):
self._api = api
self._scraper = scraper
self._aggregator = aggregator
self._name = name
self._types = types
self._include_types = include_types
self._exclude_types = exclude_types
self._unique_id = unique_id
self._attr_unique_id = unique_id
@@ -78,8 +84,11 @@ class WasteCollectionCalendar(CalendarEntity):
@property
def event(self):
"""Return next collection event."""
collections = self._scraper.get_upcoming(
count=1, include_today=True, types=self._types
collections = self._aggregator.get_upcoming(
count=1,
include_today=True,
include_types=self._include_types,
exclude_types=self._exclude_types,
)

if len(collections) == 0:
@@ -93,8 +102,10 @@ class WasteCollectionCalendar(CalendarEntity):
"""Return all events within specified time span."""
events = []

for collection in self._scraper.get_upcoming(
include_today=True, types=self._types
for collection in self._aggregator.get_upcoming(
include_today=True,
include_types=self._include_types,
exclude_types=self._exclude_types,
):
event = self._convert(collection)

@@ -112,5 +123,5 @@ class WasteCollectionCalendar(CalendarEntity):
)


def calc_unique_calendar_id(scraper: Scraper, type: str = None):
return scraper.unique_id + ("_" + type if type is not None else "") + "_calendar"
def calc_unique_calendar_id(shell: SourceShell, type: str = None):
return shell.unique_id + ("_" + type if type is not None else "") + "_calendar"
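The id scheme in `calc_unique_calendar_id` is simple enough to check standalone (the `unique_id` value here is a made-up placeholder, with the shell reduced to its unique_id string):

```python
def calc_unique_calendar_id(unique_id: str, type: str = None) -> str:
    # same expression as in the diff above, operating on a plain string
    return unique_id + ("_" + type if type is not None else "") + "_calendar"

print(calc_unique_calendar_id("abc_com"))           # abc_com_calendar
print(calc_unique_calendar_id("abc_com", "Paper"))  # abc_com_Paper_calendar
```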
@@ -11,8 +11,15 @@ from homeassistant.const import CONF_NAME, CONF_VALUE_TEMPLATE
from homeassistant.core import callback
from homeassistant.helpers.dispatcher import async_dispatcher_connect

# fmt: off
from custom_components.waste_collection_schedule.waste_collection_schedule.collection_aggregator import \
    CollectionAggregator

from .const import DOMAIN, UPDATE_SENSORS_SIGNAL

# fmt: on


_LOGGER = logging.getLogger(__name__)

CONF_SOURCE_INDEX = "source_index"
@@ -35,7 +42,9 @@ class DetailsFormat(Enum):
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend(
{
vol.Required(CONF_NAME): cv.string,
vol.Optional(CONF_SOURCE_INDEX, default=0): cv.positive_int,
vol.Optional(CONF_SOURCE_INDEX, default=0): vol.Any(
cv.positive_int, vol.All(cv.ensure_list, [cv.positive_int])
),  # can be a scalar or a list
vol.Optional(CONF_DETAILS_FORMAT, default="upcoming"): cv.enum(DetailsFormat),
vol.Optional(CONF_COUNT): cv.positive_int,
vol.Optional(CONF_LEADTIME): cv.positive_int,
@@ -56,14 +65,22 @@ async def async_setup_platform(hass, config, async_add_entities, discovery_info=
if date_template is not None:
date_template.hass = hass

api = hass.data[DOMAIN]

# create aggregator for all sources
source_index = config[CONF_SOURCE_INDEX]
if not isinstance(source_index, list):
source_index = [source_index]
aggregator = CollectionAggregator([api.get_shell(i) for i in source_index])
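The scalar-or-list handling above (the schema accepts either form, setup normalizes to a list) boils down to this small helper, sketched standalone:

```python
def normalize_source_index(value):
    """Accept a single index or a list of indexes, always return a list."""
    if not isinstance(value, list):
        value = [value]
    return value

print(normalize_source_index(0))       # [0]
print(normalize_source_index([0, 1]))  # [0, 1]
```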

entities = []

entities.append(
ScheduleSensor(
hass=hass,
api=hass.data[DOMAIN],
api=api,
name=config[CONF_NAME],
source_index=config[CONF_SOURCE_INDEX],
aggregator=aggregator,
details_format=config[CONF_DETAILS_FORMAT],
count=config.get(CONF_COUNT),
leadtime=config.get(CONF_LEADTIME),
@@ -85,7 +102,7 @@ class ScheduleSensor(SensorEntity):
hass,
api,
name,
source_index,
aggregator,
details_format,
count,
leadtime,
@@ -96,7 +113,7 @@ class ScheduleSensor(SensorEntity):
):
"""Initialize the entity."""
self._api = api
self._source_index = source_index
self._aggregator = aggregator
self._details_format = details_format
self._count = count
self._leadtime = leadtime
@@ -123,10 +140,6 @@ class ScheduleSensor(SensorEntity):
"""Entities have been added to hass."""
self._update_sensor()

@property
def _scraper(self):
return self._api.get_scraper(self._source_index)

@property
def _separator(self):
"""Return separator string used to join waste types."""
@@ -140,8 +153,8 @@ class ScheduleSensor(SensorEntity):
def _add_refreshtime(self):
"""Add refresh-time (= last fetch time) to device-state-attributes."""
refreshtime = ""
if self._scraper.refreshtime is not None:
refreshtime = self._scraper.refreshtime.strftime("%x %X")
if self._aggregator.refreshtime is not None:
refreshtime = self._aggregator.refreshtime.strftime("%x %X")
self._attr_attribution = f"Last update: {refreshtime}"

def _set_state(self, upcoming):
@@ -179,14 +192,15 @@ class ScheduleSensor(SensorEntity):
def _update_sensor(self):
"""Update the state and the device-state-attributes of the entity.

Called when new data has been fetched from the scraper source.
Called when new data has been fetched from the source.
"""
if self._scraper is None:
_LOGGER.error(f"source_index {self._source_index} out of range")
if self._aggregator is None:
return None

upcoming1 = self._scraper.get_upcoming_group_by_day(
count=1, types=self._collection_types, include_today=self._include_today,
upcoming1 = self._aggregator.get_upcoming_group_by_day(
count=1,
include_types=self._collection_types,
include_today=self._include_today,
)

self._set_state(upcoming1)
@@ -194,17 +208,17 @@ class ScheduleSensor(SensorEntity):
attributes = {}

collection_types = (
sorted(self._scraper.get_types())
sorted(self._aggregator.types)
if self._collection_types is None
else self._collection_types
)

if self._details_format == DetailsFormat.upcoming:
# show upcoming events list in details
upcoming = self._scraper.get_upcoming_group_by_day(
upcoming = self._aggregator.get_upcoming_group_by_day(
count=self._count,
leadtime=self._leadtime,
types=self._collection_types,
include_types=self._collection_types,
include_today=self._include_today,
)
for collection in upcoming:
@@ -214,8 +228,8 @@ class ScheduleSensor(SensorEntity):
elif self._details_format == DetailsFormat.appointment_types:
# show list of collections in details
for t in collection_types:
collections = self._scraper.get_upcoming(
count=1, types=[t], include_today=self._include_today
collections = self._aggregator.get_upcoming(
count=1, include_types=[t], include_today=self._include_today
)
date = (
"" if len(collections) == 0 else self._render_date(collections[0])
@@ -224,15 +238,15 @@ class ScheduleSensor(SensorEntity):
elif self._details_format == DetailsFormat.generic:
# insert generic attributes into details
attributes["types"] = collection_types
attributes["upcoming"] = self._scraper.get_upcoming(
attributes["upcoming"] = self._aggregator.get_upcoming(
count=self._count,
leadtime=self._leadtime,
types=self._collection_types,
include_types=self._collection_types,
include_today=self._include_today,
)
refreshtime = ""
if self._scraper.refreshtime is not None:
refreshtime = self._scraper.refreshtime.isoformat(timespec="seconds")
if self._aggregator.refreshtime is not None:
refreshtime = self._aggregator.refreshtime.isoformat(timespec="seconds")
attributes["last_update"] = refreshtime

if len(upcoming1) > 0:
@@ -1,2 +1,3 @@
from .collection import Collection, CollectionBase, CollectionGroup  # type: ignore # isort:skip # noqa: F401
from .scraper import Customize, Scraper  # noqa: F401
from .collection_aggregator import CollectionAggregator  # noqa: F401
from .source_shell import Customize, SourceShell  # noqa: F401
@@ -0,0 +1,121 @@
import itertools
import logging
from datetime import datetime, timedelta

from .collection import CollectionGroup

_LOGGER = logging.getLogger(__name__)


class CollectionAggregator:
    def __init__(self, shells):
        self._shells = shells

    @property
    def _entries(self):
        """Merge all entries from all connected sources."""
        return [e for s in self._shells for e in s._entries]

    @property
    def refreshtime(self):
        """Simply return the timestamp of the first source."""
        return self._shells[0].refreshtime

    @property
    def types(self):
        """Return set() of all collection types."""
        return {e.type for e in self._entries}

    def get_upcoming(
        self,
        count=None,
        leadtime=None,
        include_types=None,
        exclude_types=None,
        include_today=False,
    ):
        """Return list of all entries, limited by count and/or leadtime.

        Keyword arguments:
        count -- limits the number of returned entries (default=10)
        leadtime -- limits the timespan in days of returned entries (default=7, 0 = today)
        """
        return self._filter(
            self._entries,
            count=count,
            leadtime=leadtime,
            include_types=include_types,
            exclude_types=exclude_types,
            include_today=include_today,
        )

    def get_upcoming_group_by_day(
        self,
        count=None,
        leadtime=None,
        include_types=None,
        exclude_types=None,
        include_today=False,
    ):
        """Return list of all entries, grouped by day, limited by count and/or leadtime."""
        entries = []

        iterator = itertools.groupby(
            self._filter(
                self._entries,
                leadtime=leadtime,
                include_types=include_types,
                exclude_types=exclude_types,
                include_today=include_today,
            ),
            lambda e: e.date,
        )

        for key, group in iterator:
            entries.append(CollectionGroup.create(list(group)))
        if count is not None:
            entries = entries[:count]

        return entries

    def _filter(
        self,
        entries,
        count=None,
        leadtime=None,
        include_types=None,
        exclude_types=None,
        include_today=False,
    ):
        # keep only waste types from the include list
        if include_types is not None:
            entries = list(filter(lambda e: e.type in set(include_types), entries))

        # remove waste types from the exclude list
        if exclude_types is not None:
            entries = list(filter(lambda e: e.type not in set(exclude_types), entries))

        # remove expired entries
        now = datetime.now().date()
        if include_today:
            entries = list(filter(lambda e: e.date >= now, entries))
        else:
            entries = list(filter(lambda e: e.date > now, entries))

        # remove entries which are too far in the future (0 = today)
        if leadtime is not None:
            x = now + timedelta(days=leadtime)
            entries = list(filter(lambda e: e.date <= x, entries))

        # ensure that entries are sorted by date
        entries.sort(key=lambda e: e.date)

        # remove surplus entries
        if count is not None:
            entries = entries[:count]

        return entries
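The filter pipeline in `_filter` can be exercised in isolation with stub entries (the `Entry` tuple is a hypothetical minimal stand-in for the real Collection objects); note that each stage must chain on the running `entries` list so that include and exclude filtering compose correctly:

```python
from collections import namedtuple
from datetime import date, timedelta

# hypothetical minimal stand-in for the real Collection objects
Entry = namedtuple("Entry", ["date", "type"])

def filter_entries(entries, count=None, leadtime=None,
                   include_types=None, exclude_types=None, include_today=False):
    """Sketch of CollectionAggregator._filter, chained on the running list."""
    if include_types is not None:
        entries = [e for e in entries if e.type in set(include_types)]
    if exclude_types is not None:
        entries = [e for e in entries if e.type not in set(exclude_types)]
    now = date.today()
    entries = [e for e in entries if (e.date >= now if include_today else e.date > now)]
    if leadtime is not None:
        entries = [e for e in entries if e.date <= now + timedelta(days=leadtime)]
    entries.sort(key=lambda e: e.date)
    return entries[:count] if count is not None else entries

today = date.today()
sample = [
    Entry(today, "Paper"),
    Entry(today + timedelta(days=2), "Bio"),
    Entry(today + timedelta(days=9), "Paper"),
]
print([e.type for e in filter_entries(sample, include_types=["Paper"],
                                      include_today=True)])  # ['Paper', 'Paper']
```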
@@ -50,11 +50,12 @@ TEST_CASES = {
"f_id_kommune": "2911",
"f_id_strasse": "2374",
},
"Thalheim": {
"AWB Limburg-Weilburg": {
"key": "0ff491ffdf614d6f34870659c0c8d917",
"f_id_kommune": 6031,
"f_id_strasse": 621,
"f_id_strasse_hnr": 872,
"f_abfallarten": [27, 28, 17, 67],
}
}
_LOGGER = logging.getLogger(__name__)
@@ -142,10 +143,15 @@ class Source:
ics_file = r.text

# Remove all lines starting with <b
# These warnings are caused by customers which use an extra radiobutton
# list to add special waste types:
# - AWB Limburg-Weilburg uses this list to select a "Sonderabfall <city>"
# waste type. The warning can be removed by adding the extra config
# option "f_abfallarten" with the following values [27, 28, 17, 67]
html_warnings = re.findall(r"<b.*", ics_file)
if html_warnings:
ics_file = re.sub(r"<br.*|<b.*", "\\r", ics_file)
_LOGGER.warning("Html tags removed from ics file: " + ', '.join(html_warnings))
#_LOGGER.warning("Html tags removed from ics file: " + ', '.join(html_warnings))

dates = self._ics.convert(ics_file)
@@ -91,25 +91,26 @@ class Source:

soup = BeautifulSoup(r.text, features="html.parser")
links = soup.find_all("a")
ical_url = ""
ical_urls = []
for any_link in links:
if " als iCal" in any_link.text:
ical_url = any_link.get("href")

if "ical.html" not in ical_url:
raise Exception("No ical Link in the result: " + str(links))

# Get the final data
r = requests.get(ical_url, headers=HEADERS)
if not r.ok:
raise Exception(f"Error: failed to fetch url: {ical_url}")

# Parse ics file
dates = self._ics.convert(r.text)
# multiple links occur during year transition
ical_urls.append(any_link.get("href"))

# Get the final data for all links
entries = []
for d in dates:
entries.append(Collection(d[0], d[1]))
for ical_url in ical_urls:
r = requests.get(ical_url, headers=HEADERS)
r.raise_for_status()

# Parse ics file
try:
dates = self._ics.convert(r.text)

for d in dates:
entries.append(Collection(d[0], d[1]))
except ValueError:
pass  # during year transition the ical for the next year may be empty
return entries

def parse_level(self, response, level):
@@ -40,7 +40,7 @@ class Source:
        street_id = 0
        property_id = 0
        today = date.today()
        nextmonth = today + timedelta(30)
        nextmonth = today + timedelta(days=365)

        # Retrieve suburbs
        r = requests.get(
@@ -45,8 +45,8 @@ class Source:
        addresses = r.json()

        address_ids = [
            x for x in addresses["candidates"]
            if x["attributes"]["PAO_TEXT"].lower() == self._number.lower() or x["attributes"]["PAO_START_NUMBER"].lower() == self._number.lower()
            x for x in addresses["results"]
            if (x["LPI"].get('PAO_TEXT') and x["LPI"]["PAO_TEXT"].lower() == self._number.lower()) or (x["LPI"].get('PAO_START_NUMBER') and x["LPI"]["PAO_START_NUMBER"].lower() == self._number.lower())
        ]

        if len(address_ids) == 0:
@@ -55,7 +55,7 @@

        q = str(API_URLS["collection"])
        r = requests.post(q, json={
            "uprn": address_ids[0]["attributes"]["UPRN"], "usrn": address_ids[0]["attributes"]["USRN"]})
            "uprn": address_ids[0]["LPI"]["UPRN"], "usrn": address_ids[0]["LPI"]["USRN"]})
        r.raise_for_status()

        collectionsRaw = json.loads(r.json()["dates"])

@@ -1,5 +1,7 @@
import logging
import requests
from datetime import datetime
from xml.dom.minidom import parseString
from waste_collection_schedule import Collection  # type: ignore[attr-defined]
from waste_collection_schedule.service.ICS import ICS

@@ -8,17 +10,33 @@ DESCRIPTION = "Source for Umweltprofis"
URL = "https://www.umweltprofis.at"
TEST_CASES = {
    "Ebensee": {"url": "https://data.umweltprofis.at/OpenData/AppointmentService/AppointmentService.asmx/GetIcalWastePickupCalendar?key=KXX_K0bIXDdk0NrTkk3xWqLM9-bsNgIVBE6FMXDObTqxmp9S39nIqwhf9LTIAX9shrlpfCYU7TG_8pS9NjkAJnM_ruQ1SYm3V9YXVRfLRws1"},
    "Rohrbach": {"xmlurl": "https://data.umweltprofis.at/opendata/AppointmentService/AppointmentService.asmx/GetTermineForLocationSecured?Key=TEMPKeyabvvMKVCic0cMcmsTEMPKey&StreetNr=118213&HouseNr=Alle&intervall=Alle"},
}

_LOGGER = logging.getLogger(__name__)

def getText(element):
    s = ""
    for e in element.childNodes:
        if e.nodeType == e.TEXT_NODE:
            s += e.nodeValue
    return s

class Source:
    def __init__(self, url):
    def __init__(self, url=None, xmlurl=None):
        self._url = url
        self._xmlurl = xmlurl
        self._ics = ICS()
        if url is None and xmlurl is None:
            raise Exception("either url or xmlurl needs to be specified")

    def fetch(self):
        if self._url is not None:
            return self.fetch_ics()
        elif self._xmlurl is not None:
            return self.fetch_xml()

    def fetch_ics(self):
        r = requests.get(self._url)
        if r.status_code != 200:
            _LOGGER.error("Error querying calendar data")
@@ -32,3 +50,18 @@ class Source:
        for d in dates:
            entries.append(Collection(d[0], d[1]))
        return entries

    def fetch_xml(self):
        r = requests.get(self._xmlurl)
        r.raise_for_status()

        doc = parseString(r.text)
        appointments = doc.getElementsByTagName("AppointmentEntry")

        entries = []
        for a in appointments:
            date_string = getText(a.getElementsByTagName("Datum")[0])
            date = datetime.fromisoformat(date_string).date()
            waste_type = getText(a.getElementsByTagName("WasteType")[0])
            entries.append(Collection(date, waste_type))
        return entries

@@ -0,0 +1,118 @@
import datetime
import json
import requests

from bs4 import BeautifulSoup
from requests.utils import requote_uri
from waste_collection_schedule import Collection

TITLE = "Horowhenua District Council"
DESCRIPTION = "Source for Horowhenua District Council Rubbish & Recycling collection."
URL = "https://www.horowhenua.govt.nz/"
TEST_CASES = {
    "House-Shannon": {
        "post_code": "4821",
        "town": "Shannon",
        "street_name": "Bryce Street",
        "street_number": "55",
    },
    "House-Levin": {
        "post_code": "5510",
        "town": "Levin",
        "street_name": "McKenzie Street",
        "street_number": "15",
    },
    "Commercial-Foxton": {
        "post_code": "4814",
        "town": "Foxton",
        "street_name": "State Highway 1",
        "street_number": "18",
    },
}

API_URLS = {
    "session": "https://www.horowhenua.govt.nz",
    "search": "https://www.horowhenua.govt.nz/api/v1/myarea/search?keywords={}",
    "schedule": "https://www.horowhenua.govt.nz/ocapi/Public/myarea/wasteservices?geolocationid={}&ocsvclang=en-AU",
}

HEADERS = {
    "user-agent": "Mozilla/5.0",
}

ICON_MAP = {
    "Rubbish": "mdi:trash-can",
    "Recycling": "mdi:recycle",
}

# _LOGGER = logging.getLogger(__name__)


class Source:
    def __init__(
        self, post_code: str, town: str, street_name: str, street_number: str
    ):
        self.post_code = post_code
        self.town = town.upper()
        self.street_name = street_name
        self.street_number = street_number

    def fetch(self):

        locationId = 0

        # the 'collection' api call seems to require an ASP.Net_sessionID, so obtain the relevant cookie
        s = requests.Session()
        q = requote_uri(str(API_URLS["session"]))
        r0 = s.get(q, headers=HEADERS)

        # Do initial address search
        address = "{} {} {} {}".format(self.street_number, self.street_name, self.town, self.post_code)
        q = requote_uri(str(API_URLS["search"]).format(address))
        r1 = s.get(q, headers=HEADERS)
        data = json.loads(r1.text)

        # Find the geolocation for the address
        for item in data["Items"]:
            normalized_input = Source.normalize_address(address)
            normalized_response = Source.normalize_address(item["AddressSingleLine"])
            if normalized_input in normalized_response:
                locationId = item["Id"]
                break

        if locationId == 0:
            return []

        # Retrieve the upcoming collections for location
        q = requote_uri(str(API_URLS["schedule"]).format(locationId))
        r2 = s.get(q, headers=HEADERS)
        data = json.loads(r2.text)
        responseContent = data["responseContent"]

        soup = BeautifulSoup(responseContent, "html.parser")
        services = soup.find_all("article")

        entries = []

        for item in services:
            waste_type = item.find("h3").text
            date = datetime.datetime.strptime(item.find("div", {"class": "next-service"}).text.strip(), "%a %d/%m/%Y").date()
            entries.append(
                Collection(
                    date=date,
                    t=waste_type,  # api returns Recycling, Rubbish
                    icon=ICON_MAP.get(waste_type, "mdi:trash-can"),
                )
            )

        return entries

    @staticmethod
    def normalize_address(address_str):
        # Remove leading/trailing whitespace, uppercase
        address_str = address_str.strip().upper()
        # Replace any run of multiple white-space characters with a single space
        address_str = " ".join(address_str.split())

        return address_str

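The address-matching step above hinges on comparing whitespace- and case-normalized strings with a substring test. A minimal standalone sketch of that normalization (the sample addresses are made up for illustration):

```python
def normalize_address(address_str):
    # Trim, uppercase, and collapse runs of whitespace to single spaces
    address_str = address_str.strip().upper()
    return " ".join(address_str.split())

# A normalized query can then be matched with a simple substring check,
# mirroring the `normalized_input in normalized_response` test above.
query = normalize_address("  55  Bryce   Street ")
candidate = normalize_address("55 Bryce Street, Shannon 4821")
print(query)              # 55 BRYCE STREET
print(query in candidate) # True
```

This makes the match tolerant of formatting differences between the user's configuration and the council API's single-line address strings.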
@@ -0,0 +1,90 @@
import logging
import requests
import time

from datetime import datetime
from waste_collection_schedule import Collection

TITLE = 'www.kingston.gov.uk'
DESCRIPTION = (
    'Source for waste collection services for The Royal Borough of Kingston Council'
)
URL = 'https://kingston-self.achieveservice.com/service/in_my_area?displaymode=collections'


HEADERS = {
    "user-agent": "Mozilla/5.0",
}

COOKIES = {

}

TEST_CASES = {
    "Blagdon Road - number": {"uprn": 100021772910},
    "Blagdon Road - string": {"uprn": "100021772910"},
}

API_URLS = {
    'session': 'https://kingston-self.achieveservice.com/service/In_my_Area_Results?uprn=100021772910&displaymode=collections&altVal=',
    'auth': 'https://kingston-self.achieveservice.com/authapi/isauthenticated?uri=https%253A%252F%252Fkingston-self.achieveservice.com%252Fservice%252FIn_my_Area_Results%253Fuprn%253D100021772910%2526displaymode%253Dcollections%2526altVal%253D&hostname=kingston-self.achieveservice.com&withCredentials=true',
    'schedule': 'https://kingston-self.achieveservice.com/apibroker/runLookup?id=601a61f9a3188&repeat_against=&noRetry=true&getOnlyTokens=undefined&log_id=&app_name=AF-Renderer::Self&'
}

_LOGGER = logging.getLogger(__name__)


class Source:
    def __init__(self, uprn: str):
        self._uprn = str(uprn)

    def fetch(self):
        s = requests.Session()

        # This request sets up the cookies
        r0 = s.get(API_URLS['session'], headers=HEADERS)
        r0.raise_for_status()

        # This request gets the session key from the PHPSESSID (in the cookies)
        authRequest = s.get(API_URLS['auth'], headers=HEADERS)
        authData = authRequest.json()
        sessionKey = authData['auth-session']
        now = time.time_ns() // 1_000_000

        # now query using the uprn
        payload = { "formValues": { "Section 1": { "UPRN_FromUrl": { "value": self._uprn }, "borough_code": { "value": "RBK" }, "show_wasteCollection": { "value": "1" }, "echo_borough": { "value": "RBK" }, "echo_uprn": { "value": self._uprn } } } }

        scheduleRequest = s.post(API_URLS['schedule'] + '&_' + str(now) + '&sid=' + sessionKey, headers=HEADERS, json=payload)
        data = scheduleRequest.json()['integration']['transformed']['rows_data']['0']

        entries = []

        entries.append(Collection(
            date=datetime.strptime(data['echo_refuse_next_date'], '%Y-%m-%d %H:%M:%S').date(),
            t='refuse bin',
            icon='mdi:trash-can'
        ))

        entries.append(Collection(
            date=datetime.strptime(data['echo_food_waste_next_date'], '%Y-%m-%d %H:%M:%S').date(),
            t='food waste bin',
            icon='mdi:trash-can'
        ))

        entries.append(Collection(
            date=datetime.strptime(data['echo_paper_and_card_next_date'], '%Y-%m-%d %H:%M:%S').date(),
            t='paper and card recycling bin',
            icon='mdi:recycle'
        ))

        entries.append(Collection(
            date=datetime.strptime(data['echo_mixed_recycling_next_date'], '%Y-%m-%d %H:%M:%S').date(),
            t='mixed recycling bin',
            icon='mdi:recycle'
        ))

        entries.append(Collection(
            date=datetime.strptime(data['echo_garden_waste_next_date'], '%Y-%m-%d %H:%M:%S').date(),
            t='garden waste bin',
            icon='mdi:leaf'
        ))

        return entries
@@ -0,0 +1,91 @@
import logging
import requests
import time

from datetime import datetime
from waste_collection_schedule import Collection

TITLE = 'middlesbrough.gov.uk'
DESCRIPTION = (
    'Source for waste collection services for Middlesbrough Council'
)
URL = 'https://www.middlesbrough.gov.uk/bin-collection-dates'


HEADERS = {
    "user-agent": "Mozilla/5.0",
}

COOKIES = {

}

TEST_CASES = {
    "Tollesby Road - number": {"uprn": 100110140843},
    "Tollesby Road - string": {"uprn": "100110140843"},
    "Victoria Road - number": {"uprn": 100110774949},
    "Victoria Road - string": {"uprn": "100110774949"},
}

API_URLS = {
    'session': 'https://my.middlesbrough.gov.uk/en/AchieveForms/?mode=fill&consentMessage=yes&form_uri=sandbox-publish://AF-Process-37a44b6e-cbef-499a-9fb9-d8d507613c95/AF-Stage-bfbb065e-0dda-4ae6-933d-9e6b91cc56ce/definition.json&process=1&process_uri=sandbox-processes://AF-Process-37a44b6e-cbef-499a-9fb9-d8d507613c95&process_id=AF-Process-37a44b6e-cbef-499a-9fb9-d8d507613c95&noLoginPrompt=1',
    'auth': 'https://my.middlesbrough.gov.uk/authapi/isauthenticated?uri=https%253A%252F%252Fmy.middlesbrough.gov.uk%252Fen%252FAchieveForms%252F%253Fmode%253Dfill%2526consentMessage%253Dyes%2526form_uri%253Dsandbox-publish%253A%252F%252FAF-Process-37a44b6e-cbef-499a-9fb9-d8d507613c95%252FAF-Stage-bfbb065e-0dda-4ae6-933d-9e6b91cc56ce%252Fdefinition.json%2526process%253D1%2526process_uri%253Dsandbox-processes%253A%252F%252FAF-Process-37a44b6e-cbef-499a-9fb9-d8d507613c95%2526process_id%253DAF-Process-37a44b6e-cbef-499a-9fb9-d8d507613c95%2526noLoginPrompt%253D1&hostname=my.middlesbrough.gov.uk&withCredentials=true',
    'schedule': 'https://my.middlesbrough.gov.uk/apibroker/runLookup?id=5d78f40439054&repeat_against=&noRetry=true&getOnlyTokens=undefined&log_id=&app_name=AF-Renderer::Self&'
}

_LOGGER = logging.getLogger(__name__)


class Source:
    def __init__(self, uprn: str):
        self._uprn = str(uprn)

    def fetch(self):
        s = requests.Session()

        # This request sets up the cookies
        r0 = s.get(API_URLS['session'], headers=HEADERS)
        r0.raise_for_status()

        # This request gets the session key from the PHPSESSID (in the cookies)
        authRequest = s.get(API_URLS['auth'], headers=HEADERS)
        authData = authRequest.json()
        sessionKey = authData['auth-session']
        now = time.time_ns() // 1_000_000

        # now query using the uprn
        payload = {
            "formValues": { "Find My Collection Dates": { "uprn_search": { "value": self._uprn } } }
        }
        scheduleRequest = s.post(API_URLS['schedule'] + '&_' + str(now) + '&sid=' + sessionKey, headers=HEADERS, json=payload)
        data = scheduleRequest.json()['integration']['transformed']['rows_data']['0']

        refuseDates = data['Refuse'].split('<br />')
        recyclingDates = data['Recycling'].split('<br />')
        greenDates = data['Green'].split('<br />')

        entries = []

        for date in refuseDates:
            if len(date) > 0:
                entries.append(Collection(
                    date=datetime.strptime(date, '%d/%m/%Y').date(),
                    t='refuse bin',
                    icon='mdi:trash-can'
                ))

        for date in recyclingDates:
            if len(date) > 0:
                entries.append(Collection(
                    date=datetime.strptime(date, '%d/%m/%Y').date(),
                    t='recycling bin',
                    icon='mdi:recycle'
                ))

        for date in greenDates:
            if len(date) > 0:
                entries.append(Collection(
                    date=datetime.strptime(date, '%d/%m/%Y').date(),
                    t='green bin',
                    icon='mdi:leaf'
                ))

        return entries
@@ -0,0 +1,84 @@
import requests
import datetime

from waste_collection_schedule import Collection
from waste_collection_schedule.service.ICS import ICS

TITLE = "Städteservice"
DESCRIPTION = "Städteservice Raunheim Rüsselsheim"
URL = "https://www.staedteservice.de"

TEST_CASES = {
    "Rüsselsheim": {
        "city": "Rüsselsheim",
        "street_number": "411"
    },
    "Raunheim": {
        "city": "Raunheim",
        "street_number": "565"
    },
}

BASE_URL = "https://www.staedteservice.de/abfallkalender"

CITY_CODE_MAP = {
    "Rüsselsheim": 1,
    "Raunheim": 2
}

class Source:
    def __init__(self, city, street_number):
        self.city = str(city)
        self.city_code = CITY_CODE_MAP[city]
        self.street_number = str(street_number)
        self._ics = ICS()

    def fetch(self) -> list:
        currentDateTime = datetime.datetime.now()
        year = currentDateTime.year
        month = currentDateTime.month

        session = requests.Session()

        dates = self.get_dates(session, year, month)

        entries = []
        for d in dates:
            entries.append(Collection(d[0], d[1]))

        return entries

    def get_dates(self, session: requests.Session, year: int, month: int) -> list:
        current_calendar = self.get_calendar_from_site(session, year)
        calendar = self.fix_trigger(current_calendar)
        dates = self._ics.convert(calendar)

        # in December the calendar for the next year is also available
        if month == 12:
            year += 1
            next_calendar = self.get_calendar_from_site(session, year)
            calendar = self.fix_trigger(next_calendar)
            dates += self._ics.convert(calendar)

        return dates

    def get_calendar_from_site(self, session: requests.Session, year: int) -> str:
        # example format: https://www.staedteservice.de/abfallkalender_1_477_2023.ics
        URL = f"{BASE_URL}_{self.city_code}_{self.street_number}_{year}.ics"

        r = session.get(URL)
        r.raise_for_status()
        r.encoding = "utf-8"  # ensure it is the right encoding

        return r.text

    def fix_trigger(self, calendar: str) -> str:
        # the "TRIGGER" is set to "-PT1D" in the ical file
        # the integration fails with the following log output: ValueError: Invalid iCalendar duration: -PT1D
        # according to https://www.kanzaki.com/docs/ical/duration-t.html
        # the "T" should come after the dur-day if a dur-time is specified, and never before dur-day
        # because there is no dur-time specified we can simply drop the "T" from the TRIGGER

        fixed_calendar = calendar.replace("-PT1D", "-P1D")

        return fixed_calendar

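The comment in `fix_trigger` above explains why `-PT1D` is invalid: in an iCalendar duration, `T` introduces the time components (hours/minutes/seconds) and must not precede a day component, so a one-day offset is written `-P1D`. A small sketch of the rewrite on a made-up alarm snippet:

```python
# RFC 5545 durations: "-P1D" = one day before, "-PT1H" = one hour before.
# "-PT1D" is malformed because "T" may only precede time components.
calendar = "BEGIN:VALARM\nTRIGGER:-PT1D\nEND:VALARM"
fixed = calendar.replace("-PT1D", "-P1D")
print(fixed.splitlines()[1])  # TRIGGER:-P1D
```

A plain string replace is safe here because `-PT1D` cannot occur in any valid duration, so only the broken triggers are touched.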
@@ -0,0 +1,54 @@
from datetime import datetime
from urllib.parse import quote as urlquote

import requests
from waste_collection_schedule import Collection

TITLE = "Tewkesbury Borough Council Waste and Recycling"
DESCRIPTION = "Home waste collection schedule for Tewkesbury Borough Council"
URL = "https://www.tewkesbury.gov.uk/waste-and-recycling"
TEST_CASES = {
    "Council Office": {"postcode": "GL20 5TT"},
    "Council Office No Spaces": {"postcode": "GL205TT"},
}

API_URL = "https://api-2.tewkesbury.gov.uk/general/rounds/%s/nextCollection"

ICONS = {
    "Refuse": "mdi:trash-can",
    "Recycling": "mdi:recycle",
    "Garden": "mdi:leaf",
    "Food": "mdi:silverware-fork-knife",
}


class Source:
    def __init__(self, postcode: str = None):
        self._postcode = postcode

    def fetch(self):
        if self._postcode is None:
            raise Exception("postcode not set")

        encoded_postcode = urlquote(self._postcode)
        request_url = API_URL % encoded_postcode
        response = requests.get(request_url)

        response.raise_for_status()
        data = response.json()

        entries = []

        if data["status"] == "OK":
            schedule = data["body"]
            for schedule_entry in schedule:
                entries.append(
                    Collection(
                        date=datetime.strptime(
                            schedule_entry["NextCollection"], "%Y-%m-%d").date(),
                        t=schedule_entry["collectionType"],
                        icon=ICONS.get(schedule_entry["collectionType"])
                    )
                )

        return entries
@@ -0,0 +1,87 @@
from datetime import datetime

import requests
from waste_collection_schedule import Collection  # type: ignore[attr-defined]
from waste_collection_schedule.service.ICS import ICS

TITLE = "Abfallkalender Wermelskirchen"
DESCRIPTION = "Source for Abfallabholung Wermelskirchen, Germany"
URL = "https://www.wermelskirchen.de/rathaus/buergerservice/formulare-a-z/abfallkalender-online/"

TEST_CASES = {
    "Rathaus": {"street": "Telegrafenstraße", "house_number": "29"},
    "Krankenhaus": {"street": "Königstraße", "house_number": "100"},
    "Mehrzweckhalle": {"street": "An der Mehrzweckhalle", "house_number": "1"},
}

INFOS = {
    "Restabfall 2-woechentlich": {
        "icon": "mdi:trash-can",
        "image": "https://abfallkalender.citkomm.de/fileadmin/_processed_/1/b/csm_Restmuell_6b2b32c774.png",
    },
    "Restabfall 4-woechentlich": {
        "icon": "mdi:trash-can",
        "image": "https://abfallkalender.citkomm.de/fileadmin/_processed_/1/b/csm_Restmuell_6b2b32c774.png",
    },
    "Restabfall 6-woechentlich": {
        "icon": "mdi:trash-can",
        "image": "https://abfallkalender.citkomm.de/fileadmin/_processed_/1/b/csm_Restmuell_6b2b32c774.png",
    },
    "Gelber Sack": {
        "icon": "mdi:recycle-variant",
        "image": "https://abfallkalender.citkomm.de/fileadmin/_processed_/f/4/csm_GelbeTonne_24ffc276b2.png",
    },
    "Papier": {
        "icon": "mdi:package-variant",
        "image": "https://abfallkalender.citkomm.de/fileadmin/_processed_/2/3/csm_Papiertonne_919ed3b5da.png",
    },
    "Biotonne": {
        "icon": "mdi:leaf",
        "image": "https://abfallkalender.citkomm.de/fileadmin/_processed_/6/f/csm_Biotonne_wk_ae1b0e61aa.png",
    },
    "Schadstoffsammlung": {
        "icon": "mdi:bottle-tonic-skull",
        "image": "https://abfallkalender.citkomm.de/fileadmin/_processed_/4/2/csm_sondermuell_62f5701a7b.png",
    },
    "Weihnachtsbaum": {"icon": "mdi:pine-tree", "image": ""},
}


class Source:
    def __init__(self, street, house_number):
        self._street = street
        self._house_number = str(house_number)
        self._ics = ICS()

    def fetch(self):
        # the url contains the current year, but this doesn't really seem to matter, at least for the ical, since the result is always the same
        # still replace it for compatibility's sake
        now = datetime.now()
        url = f"https://abfallkalender.citkomm.de/wermelskirchen/abfallkalender-{now.year}/ics/FrontendIcs.html"
        params = {
            "tx_citkoabfall_abfallkalender[strasse]": self._street,
            "tx_citkoabfall_abfallkalender[hausnummer]": self._house_number,
            "tx_citkoabfall_abfallkalender[abfallarten][0]": 86,
            "tx_citkoabfall_abfallkalender[abfallarten][1]": 85,
            "tx_citkoabfall_abfallkalender[abfallarten][2]": 84,
            "tx_citkoabfall_abfallkalender[abfallarten][3]": 82,
            "tx_citkoabfall_abfallkalender[abfallarten][4]": 81,
            "tx_citkoabfall_abfallkalender[abfallarten][5]": 80,
            "tx_citkoabfall_abfallkalender[abfallarten][6]": 79,
            "tx_citkoabfall_abfallkalender[abfallarten][7]": 76,
            "tx_citkoabfall_abfallkalender[abfallarten][8]": 75,
            "tx_citkoabfall_abfallkalender[abfallarten][9]": 74,
        }
        r = requests.get(url, params=params)
        r.raise_for_status()

        r.encoding = "utf-8"
        dates = self._ics.convert(r.text)

        entries = []
        for d in dates:
            info = INFOS.get(d[1], {"icon": "mdi:trash-can", "image": ""})
            entries.append(
                Collection(d[0], d[1], picture=info["image"], icon=info["icon"])
            )
        return entries
@@ -1,6 +1,6 @@
import requests
from datetime import date, datetime

from datetime import datetime
import requests
from bs4 import BeautifulSoup
from waste_collection_schedule import Collection

@@ -13,10 +13,20 @@ TEST_CASES = {
SEARCH_URLS = {
    "collection_search": "https://ilforms.wiltshire.gov.uk/wastecollectiondays/collectionlist"
}
COLLECTIONS = {"Household waste",
               "Mixed dry recycling (blue lidded bin)",  # some addresses may not have a black box collection
               "Mixed dry recycling (blue lidded bin) and glass (black box or basket)"
               }
COLLECTIONS = {
    "Household waste",
    "Mixed dry recycling (blue lidded bin)",  # some addresses may not have a black box collection
    "Mixed dry recycling (blue lidded bin) and glass (black box or basket)",
    "Chargeable garden waste",  # some addresses also have a chargeable garden waste collection
}


def add_month(date_):
    if date_.month < 12:
        date_ = date_.replace(month=date_.month + 1)
    else:
        date_ = date_.replace(year=date_.year + 1, month=1)
    return date_

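`add_month` above relies on `fetch` always starting from `date.today().replace(day=1)`: because the day is pinned to 1, the plain `replace` rollover can never produce an invalid date (it would raise for e.g. January 31 → February 31). A quick sketch of the rollover, including the December wrap:

```python
from datetime import date

def add_month(date_):
    # Advance to the same day of the next month, rolling the year over at December
    if date_.month < 12:
        return date_.replace(month=date_.month + 1)
    return date_.replace(year=date_.year + 1, month=1)

print(add_month(date(2022, 11, 1)))  # 2022-12-01
print(add_month(date(2022, 12, 1)))  # 2023-01-01
```

Iterating this seven times from the first of the current month covers the roughly half-year window the source scrapes.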
class Source:
@@ -27,26 +37,37 @@ class Source:
        self._postcode = postcode

    def fetch(self):
        fetch_month = date.today().replace(day=1)

        entries = []
        session = requests.Session()
        for i in range(0, 7):
            entries.extend(self.fetch_month(fetch_month))
            fetch_month = add_month(fetch_month)

        return entries

    def fetch_month(self, fetch_month):
        args = {
            "Postcode": self._postcode,
            "Uprn": self._uprn,
            "Month": fetch_month.month,
            "Year": fetch_month.year,
        }
        r = session.post(SEARCH_URLS["collection_search"], params=args)
        r.raise_for_status()
        soup = BeautifulSoup(r.text, 'html.parser')
        for collection in COLLECTIONS:
            for tag in soup.find_all(
                attrs={"data-original-title": collection}
            ):

        r = requests.post(SEARCH_URLS["collection_search"], params=args)
        r.raise_for_status()

        soup = BeautifulSoup(r.text, "html.parser")

        entries = []
        for collection in COLLECTIONS:
            for tag in soup.find_all(attrs={"data-original-title": collection}):
                entries.append(
                    Collection(
                        datetime.strptime(
                            tag['data-original-datetext'], "%A %d %B, %Y").date(),
                            tag["data-original-datetext"], "%A %d %B, %Y"
                        ).date(),
                        collection,
                    )
                )

        return entries

@@ -1,13 +1,10 @@
#!/usr/bin/env python3

import datetime
import importlib
import itertools
import logging
import traceback
from typing import Dict, List, Optional

from .collection import Collection, CollectionGroup
from .collection import Collection

_LOGGER = logging.getLogger(__name__)

@@ -85,7 +82,7 @@ def customize_function(entry: Collection, customize: Dict[str, Customize]):
    return entry


class Scraper:
class SourceShell:
    def __init__(
        self,
        source,
@@ -106,10 +103,6 @@ class Scraper:
        self._refreshtime = None
        self._entries: List[Collection] = []

    @property
    def source(self):
        return self._source

    @property
    def refreshtime(self):
        return self._refreshtime
@@ -158,113 +151,31 @@ class Scraper:

        self._entries = list(entries)

    def get_types(self):
        """Return set() of all collection types."""
        types = set()
        for e in self._entries:
            types.add(e.type)
        return types

    def get_dedicated_calendar_types(self):
        """Return set of waste types with a dedicated calendar."""
        types = set()

        for key, customize in self._customize.items():
            if customize.show and customize.use_dedicated_calendar:
                types.add(key)

        return types or None

    def get_global_calendar_types(self):
        types = set()

        for key, customize in self._customize.items():
            if customize.show and not customize.use_dedicated_calendar:
                types.add(key)

        return types or None

    def get_upcoming(self, count=None, leadtime=None, types=None, include_today=False):
        """Return list of all entries, limited by count and/or leadtime.

        Keyword arguments:
        count -- limits the number of returned entries (default=10)
        leadtime -- limits the timespan in days of returned entries (default=7, 0 = today)
        """
        return self._filter(
            self._entries,
            count=count,
            leadtime=leadtime,
            types=types,
            include_today=include_today,
        )

    def get_upcoming_group_by_day(
        self, count=None, leadtime=None, types=None, include_today=False
    ):
        """Return list of all entries, grouped by day, limited by count and/or leadtime."""
        entries = []

        iterator = itertools.groupby(
            self._filter(
                self._entries,
                leadtime=leadtime,
                types=types,
                include_today=include_today,
            ),
            lambda e: e.date,
        )

        for key, group in iterator:
            entries.append(CollectionGroup.create(list(group)))
        if count is not None:
            entries = entries[:count]

        return entries
        return types

    def get_calendar_title_for_type(self, type):
        """Return calendar title for waste type (used for dedicated calendars)."""
        c = self._customize.get(type)
        if c is not None and c.dedicated_calendar_title:
            return c.dedicated_calendar_title

        return self.calendar_title
        return self.get_collection_type_name(type)

    def get_collection_type(self, type):
    def get_collection_type_name(self, type):
        c = self._customize.get(type)
        if c is not None and c.alias:
            return c.alias

        return type

    def _filter(
        self, entries, count=None, leadtime=None, types=None, include_today=False
    ):
        # remove unwanted waste types
        if types is not None:
            # generate set
            types_set = {t for t in types}
            entries = list(filter(lambda e: e.type in types_set, self._entries))

        # remove expired entries
        now = datetime.datetime.now().date()
        if include_today:
            entries = list(filter(lambda e: e.date >= now, entries))
        else:
            entries = list(filter(lambda e: e.date > now, entries))

        # remove entries which are too far in the future (0 = today)
        if leadtime is not None:
            x = now + datetime.timedelta(days=leadtime)
            entries = list(filter(lambda e: e.date <= x, entries))

        # ensure that entries are sorted by date
        entries.sort(key=lambda e: e.date)

        # remove surplus entries
        if count is not None:
            entries = entries[:count]

        return entries

    @staticmethod
    def create(
        source_name: str,
@@ -273,9 +184,6 @@ class Scraper:
|
||||
calendar_title: Optional[str] = None,
|
||||
):
|
||||
# load source module
|
||||
|
||||
# for home-assistant, use the last 3 folders, e.g. custom_component/wave_collection_schedule/waste_collection_schedule
|
||||
# otherwise, only use waste_collection_schedule
|
||||
try:
|
||||
source_module = importlib.import_module(
|
||||
f"waste_collection_schedule.source.{source_name}"
|
||||
@@ -287,19 +195,19 @@ class Scraper:
|
||||
# create source
|
||||
source = source_module.Source(**source_args) # type: ignore
|
||||
|
||||
# create scraper
|
||||
g = Scraper(
|
||||
# create source shell
|
||||
g = SourceShell(
|
||||
source=source,
|
||||
customize=customize,
|
||||
title=source_module.TITLE, # type: ignore[attr-defined]
|
||||
description=source_module.DESCRIPTION, # type: ignore[attr-defined]
|
||||
url=source_module.URL, # type: ignore[attr-defined]
|
||||
calendar_title=calendar_title,
|
||||
unique_id=calc_unique_scraper_id(source_name, source_args),
|
||||
unique_id=calc_unique_source_id(source_name, source_args),
|
||||
)
|
||||
|
||||
return g
|
||||
|
||||
|
||||
def calc_unique_scraper_id(source_name, source_args):
|
||||
def calc_unique_source_id(source_name, source_args):
|
||||
return source_name + str(sorted(source_args.items()))
|
||||
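The unique-id helper at the end of the hunk sorts the argument items, so two configurations that differ only in argument order map to the same id. A standalone copy (assumption: the one-liner is restated here verbatim for illustration):

```python
# Re-statement of calc_unique_source_id from the diff above.
def calc_unique_source_id(source_name, source_args):
    return source_name + str(sorted(source_args.items()))


# Same source and arguments, different dict insertion order.
a = calc_unique_source_id("example_source", {"city": "Raunheim", "street_number": "565"})
b = calc_unique_source_id("example_source", {"street_number": "565", "city": "Raunheim"})
```

Because `sorted()` orders the `(key, value)` pairs before stringifying, `a` and `b` are identical.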
@@ -4,20 +4,21 @@ Support for schedules provided by [Umweltprofis.at](https://www.umweltprofis.at)

## Configuration via configuration.yaml

You need to generate your personal iCal Link before you can start using this source. Go to [https://data.umweltprofis.at/opendata/AppointmentService/index.aspx](https://data.umweltprofis.at/opendata/AppointmentService/index.aspx) and fill out the form. At the end, you can generate an iCal link. Copy this link and paste it to configuration.yaml as seen below.
You need to generate your personal XML link before you can start using this source. Go to [https://data.umweltprofis.at/opendata/AppointmentService/index.aspx](https://data.umweltprofis.at/opendata/AppointmentService/index.aspx) and fill out the form. At the end, at step 6, you get a link to an XML file. Copy this link and paste it into configuration.yaml as seen below.

```yaml
waste_collection_schedule:
  sources:
    - name: data_umweltprofis_at
      args:
        url: URL
        xmlurl: URL
```

### Configuration Variables

**URL**<br>
*(url) (required)*
**xmlurl**<br>
*(URL) (required)*

## Example

@@ -26,5 +27,5 @@ waste_collection_schedule:
  sources:
    - name: data_umweltprofis_at
      args:
        url: https://data.umweltprofis.at/OpenData/AppointmentService/AppointmentService.asmx/GetIcalWastePickupCalendar?key=xxx
        xmlurl: https://data.umweltprofis.at/opendata/AppointmentService/AppointmentService.asmx/GetTermineForLocationSecured?Key=TEMPKeyabvvMKVCic0cMcmsTEMPKey&StreetNr=124972&HouseNr=Alle&intervall=Alle
```
doc/source/horowhenua_govt_nz.md (new file)
@@ -0,0 +1,48 @@
# Horowhenua District Council

Support for schedules provided by [Horowhenua District Council Kerbside Rubbish & Recycling Services](https://www.horowhenua.govt.nz/Services/Home-Property/Rubbish-Recycling/Kerbside-Rubbish-Recycling-Services).

## Configuration via configuration.yaml

```yaml
waste_collection_schedule:
  sources:
    - name: horowhenua_govt_nz
      args:
        post_code: POST_CODE
        town: TOWN
        street_name: STREET_NAME
        street_number: STREET_NUMBER
```

### Configuration Variables

**post_code**<br>
*(string) (required)*

**town**<br>
*(string) (required)*

**street_name**<br>
*(string) (required)*

**street_number**<br>
*(string) (required)*

## Example

```yaml
waste_collection_schedule:
  sources:
    - name: horowhenua_govt_nz
      args:
        post_code: 4814
        town: Foxton
        street_name: State Highway 1
        street_number: 18
```

## How to get the source arguments

Visit the [Horowhenua District Council Waste and Recycling - Check my rubbish and recycling collection dates](https://www.horowhenua.govt.nz/Services/Home-Property/Rubbish-Recycling/Check-my-rubbish-and-recycling-collection-date) page and search for your address. The arguments should exactly match the results shown for Post Code, Town, Street, and the number portion of the Property.
@@ -29,7 +29,7 @@ waste_collection_schedule:
    - name: infeo_at
      args:
        customer: bogenschütz
        zone: "Rottenburg (Bezirk 2; Baisingen; Ergenzingen)"
        zone: "Dettenhausen"
```

## How to get the source arguments

@@ -46,8 +46,36 @@ If your provider is also using infeo.at you can just try to use the name of your

#### Bogenschuetz-Entsorgung.de
- Go to your calendar at `https://www.bogenschuetz-entsorgung.de/images/wastecal/index-zone.html`.
- Leave the year as it is and select the zone of your choice.
- Copy the whole zone name and put it into `zone` of your configuration.
- Browse through all the available years, check the naming of your desired zone, and find what makes it unique.
- Put this unique string into `zone` of your configuration.
- The source only checks whether a calendar entry contains your `zone` keyword.

##### Example 1: Dettenhausen
- For 2022 it is: `Dettenhausen, Tübingen (Bebenhausen; Lustnau)`
- For 2023 it is: `Dettenhausen`
- Use `Dettenhausen` as zone

##### Example 2: Ofterdingen
- For 2022 it is: `Dußlingen, Ofterdingen`
- For 2023 it is: `Ofterdingen`
- Use `Ofterdingen` as zone

##### Example 3: Felldorf
- For 2022 it is: `Rottenburg (Bad Niedernau; Bieringen; Eckenweiler; Frommenhausen; Obernau; Schwalldorf), Starzach (Bierlingen; Börstingen; Felldorf; Sulzau; Wachendorf)`
- For 2023 it is: `Starzach (Bierlingen; Börstingen; Felldorf; Sulzau; Wachendorf)`
- Use `Felldorf` as zone

##### Example 4: Tübingen Innenstadt
- For 2022 it is: `Tübingen (Bezirk 4 - Innenstadt)`
- For 2023 it is: `Tübingen (Bezirk 4 - Innenstadt)`
- Use `Innenstadt` as zone
- Do NOT use `Tübingen` as it is used multiple times!

##### Example 5: Pfäffingen
- For 2022 it is: `Tübingen (Bühl; Hirschau; Kilchberg; Unterjesingen; Weilheim), Rottenburg (Kiebingen; Wurmlingen), Ammerbuch (Pfäffingen)`
- For 2023 it is: `Ammerbuch (Pfäffingen)`
- Use `Pfäffingen` as zone
- Do NOT use `Ammerbuch` as it is used multiple times!
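The matching described above amounts to a plain substring check; a minimal sketch (hypothetical function name, assuming simple `in` semantics rather than the integration's actual code):

```python
def zone_matches(calendar_zone_name: str, configured_zone: str) -> bool:
    # True if the configured keyword appears anywhere in the calendar's zone name.
    return configured_zone in calendar_zone_name


# "Dettenhausen" uniquely identifies its zone across the 2022 and 2023 namings.
ok_2022 = zone_matches("Dettenhausen, Tübingen (Bebenhausen; Lustnau)", "Dettenhausen")
ok_2023 = zone_matches("Dettenhausen", "Dettenhausen")
```

This is also why an ambiguous keyword such as `Tübingen` must be avoided: it would match several different zone entries.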
### city, street, house number

doc/source/kingston_gov_uk.md (new file)
@@ -0,0 +1,32 @@
# The Royal Borough of Kingston Council

Support for schedules provided by [The Royal Borough of Kingston Council](https://kingston-self.achieveservice.com/service/in_my_area?displaymode=collections).

## Configuration via configuration.yaml

```yaml
waste_collection_schedule:
  sources:
    - name: kingston_gov_uk
      args:
        uprn: UPRN_CODE
```

### Configuration Variables

**uprn**<br>
*(string) (required)*

## Example using UPRN

```yaml
waste_collection_schedule:
  sources:
    - name: kingston_gov_uk
      args:
        uprn: "100110140843"
```

## How to get the source argument

An easy way to find your Unique Property Reference Number (UPRN) is to go to https://www.findmyaddress.co.uk/ and enter your address details.
doc/source/middlesbrough_gov_uk.md (new file)
@@ -0,0 +1,32 @@
# Middlesbrough Council

Support for schedules provided by [Middlesbrough Council](https://www.middlesbrough.gov.uk/bin-collection-dates).

## Configuration via configuration.yaml

```yaml
waste_collection_schedule:
  sources:
    - name: middlesbrough_gov_uk
      args:
        uprn: UPRN_CODE
```

### Configuration Variables

**uprn**<br>
*(string) (required)*

## Example using UPRN

```yaml
waste_collection_schedule:
  sources:
    - name: middlesbrough_gov_uk
      args:
        uprn: "100110140843"
```

## How to get the source argument

An easy way to find your Unique Property Reference Number (UPRN) is to go to https://www.findmyaddress.co.uk/ and enter your address details.
doc/source/staedteservice_de.md (new file)
@@ -0,0 +1,40 @@
# Staedteservice Raunheim Rüsselsheim

Support for schedules provided by [staedteservice.de](https://www.staedteservice.de/leistungen/abfallwirtschaft/abfallkalender/index.html) for the locations Raunheim and Rüsselsheim.

## Configuration via configuration.yaml

```yaml
waste_collection_schedule:
  sources:
    - name: staedteservice_de
      args:
        city: CITY
        street_number: STREET_NUMBER
```

### Configuration Variables

**city**<br>
*(string) (required)*

**street_number**<br>
*(string) (required)*

## Example

```yaml
waste_collection_schedule:
  sources:
    - name: staedteservice_de
      args:
        city: "Raunheim"
        street_number: "565"
```

## How to get the source arguments

1. Visit [https://www.staedteservice.de/leistungen/abfallwirtschaft/abfallkalender/index.html](https://www.staedteservice.de/leistungen/abfallwirtschaft/abfallkalender/index.html).
2. Select your `city` + `street` and hit Next (Weiter).
3. Look for the link to the iCal file. Copy the link or just hover over it. It should have the following format: `https://www.staedteservice.de/abfallkalender_2_550_2022.ics`.
4. Your `street_number` is the number before the year (the middle number). In the example above, the `street_number` is `550`.
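Steps 3 and 4 above amount to pulling the middle number out of the iCal filename; a minimal sketch (hypothetical helper, not part of the integration, assuming the filename pattern shown above):

```python
import re


def street_number_from_ics_url(url: str) -> str:
    # Assumed filename pattern: abfallkalender_<city-id>_<street_number>_<year>.ics
    m = re.search(r"abfallkalender_(\d+)_(\d+)_(\d{4})\.ics$", url)
    if m is None:
        raise ValueError("unexpected iCal link format")
    return m.group(2)


number = street_number_from_ics_url(
    "https://www.staedteservice.de/abfallkalender_2_550_2022.ics"
)
```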
doc/source/tewkesbury_gov_uk.md (new file)
@@ -0,0 +1,29 @@
# Tewkesbury Borough Council

Support for upcoming schedules provided by [Tewkesbury Borough Council](https://www.tewkesbury.gov.uk/waste-and-recycling), serving Tewkesbury (UK) and areas of Gloucestershire.

## Configuration via configuration.yaml

```yaml
waste_collection_schedule:
  sources:
    - name: tewkesbury_gov_uk
      args:
        postcode: POSTCODE
```

### Configuration Variables

**postcode**<br>
*(string) (required)*

## Example

```yaml
waste_collection_schedule:
  sources:
    - name: tewkesbury_gov_uk
      args:
        postcode: "GL20 5TT"
```
doc/source/wermelskirchen_de.md (new file)
@@ -0,0 +1,42 @@
# Wermelskirchen Abfallkalender

Support for schedules provided by [Abfallkalender Wermelskirchen](https://www.wermelskirchen.de/rathaus/buergerservice/formulare-a-z/abfallkalender-online/), located in NRW, Germany.

## Limitations

The ICS API used only provides future waste collection dates.

## Configuration via configuration.yaml

```yaml
waste_collection_schedule:
  sources:
    - name: wermelskirchen_de
      args:
        street: Telegrafenstraße
        house_number: "10"
      customize:
        - type: Restabfall 2-woechentlich
          alias: Restabfall
          show: false
        - type: Restabfall 4-woechentlich
          alias: Restabfall
          show: true
        - type: Restabfall 6-woechentlich
          alias: Restabfall
          show: false
```

### Configuration Variables

**street**<br>
*(string) (required)*

**house_number**<br>
*(string) (required)*

## How to get the source arguments

Set your street and your house number. If they don't work, check [Abfallkalender Wermelskirchen](https://www.wermelskirchen.de/rathaus/buergerservice/formulare-a-z/abfallkalender-online/) and use the closest matching entries.

Depending on your booked schedule for "Restabfall"/"Restmüll", set `show` to true for the matching type and to false for the others.
info.md
@@ -112,9 +112,11 @@ Currently the following service providers are supported:
- [Stadtreinigung.Hamburg](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/stadtreinigung_hamburg.md)
- [Stadtreinigung-Leipzig.de](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/stadtreinigung_leipzig_de.md)
- [Stadt-Willich.de](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/stadt_willich_de.md)
- [Städteservice Raunheim Rüsselsheim](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/staedteservice_de.md)
- [Südbrandenburgischer Abfallzweckverband](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/sbazv_de.md)
- [Umweltbetrieb Stadt Bielefeld](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/bielefeld_de.md)
- [WAS Wolfsburg](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/was_wolfsburg_de.md)
- [Wermelskirchen](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/wermelskirchen_de.md)

### Lithuania

@@ -130,10 +132,11 @@ Currently the following service providers are supported:
- [Auckland](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/aucklandcouncil_govt_nz.md)
- [Christchurch](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/ccc_govt_nz.md)
- [Gore, Invercargill & Southland](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/wastenet_org_nz.md)
- [Horowhenua District](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/horowhenua_govt_nz.md)
- [Waipa District](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/waipa_nz.md)
- [Wellington](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/wellington_govt_nz.md)

## Norway
### Norway

- [Min Renovasjon](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/minrenovasjon_no.md)
- [Oslo Kommune](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/oslokommune_no.md)
@@ -178,9 +181,11 @@ Currently the following service providers are supported:
- [Guildford Borough Council - guildford.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/guildford_gov_uk.md)
- [Harborough District Council - www.harborough.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/fccenvironment_co_uk.md)
- [Huntingdonshire District Council - huntingdonshire.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/huntingdonshire_gov_uk.md)
- [The Royal Borough of Kingston Council - kingston.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/kingston_gov_uk.md)
- [Lewes District Council - lewes-eastbourne.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/environmentfirst_co_uk.md)
- [London Borough of Lewisham - lewisham.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/lewisham_gov_uk.md)
- [Manchester City Council - manchester.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/manchester_uk.md)
- [Middlesbrough Council - middlesbrough.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/middlesbrough_gov_uk.md)
- [Newcastle City Council - newcastle.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/newcastle_gov_uk.md)
- [North Somerset Council - n-somerset.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/nsomerset_gov_uk.md)
- [Nottingham City Council - nottinghamcity.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/nottingham_city_gov_uk.md)
@@ -191,6 +196,7 @@ Currently the following service providers are supported:
- [South Cambridgeshire District Council - scambs.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/scambs_gov_uk.md)
- [South Norfolk and Broadland Council - southnorfolkandbroadland.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/south_norfolk_and_broadland_gov_uk.md)
- [Stevenage Borough Council - stevenage.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/stevenage_gov_uk.md)
- [Tewkesbury Borough Council](./doc/source/tewkesbury_gov_uk.md)
- [City of York Council - york.gov.uk](https://github.com/mampfes/hacs_waste_collection_schedule/blob/master/doc/source/york_gov_uk.md)
- [Walsall Council - walsall.gov.uk](./doc/source/walsall_gov_uk.md)
- [West Berkshire Council - westberks.gov.uk](./doc/source/westberks_gov_uk.md)