Merged
26 changes: 13 additions & 13 deletions docs/source/examples/dataframes-example.ipynb
Original file line number Diff line number Diff line change
@@ -85,8 +85,8 @@
"\n",
"items, dataframe = geodes.search_items(\n",
" query={\n",
" \"spaceborne:continentsID\": {\"eq\": \"AF\"},\n",
" \"temporal:endDate\": {\"gte\": date},\n",
" \"continent_code\": {\"eq\": \"AF\"},\n",
" \"end_datetime\": {\"gte\": date},\n",
" },\n",
" return_df=True,\n",
" get_all=False,\n",
@@ -98,7 +98,7 @@
"id": "f642b56e-1b3c-4b11-b0dd-84cf15481027",
"metadata": {},
"source": [
"Let's add to our result dataframe the column `spaceborne:cloudCover` : "
"Let's add to our result dataframe the column `eo:cloud_cover` : "
]
},
{
@@ -112,7 +112,7 @@
"source": [
"from pygeodes.utils.formatting import format_items\n",
"\n",
"dataframe = format_items(dataframe, {\"spaceborne:cloudCover\"})"
"dataframe = format_items(dataframe, {\"eo:cloud_cover\"})"
]
},
{
@@ -621,7 +621,7 @@
}
],
"source": [
"dataframe.explore(column=\"spaceborne:cloudCover\", cmap=\"Blues\")"
"dataframe.explore(column=\"eo:cloud_cover\", cmap=\"Blues\")"
]
},
{
@@ -630,7 +630,7 @@
"metadata": {},
"source": [
"### With literal data\n",
"It can also work with literal data, like `spaceborne:productLevel` : "
"It can also work with literal data, like `processing:level` : "
]
},
{
@@ -642,7 +642,7 @@
},
"outputs": [],
"source": [
"dataframe = format_items(dataframe, {\"spaceborne:productLevel\"})"
"dataframe = format_items(dataframe, {\"processing:level\"})"
]
},
{
@@ -918,7 +918,7 @@
}
],
"source": [
"dataframe.explore(column=\"spaceborne:productLevel\", cmap=\"Dark2\")"
"dataframe.explore(column=\"processing:level\", cmap=\"Dark2\")"
]
},
{
@@ -962,7 +962,7 @@
"source": [
"import matplotlib.pyplot as plt\n",
"\n",
"dataframe.plot(column=\"spaceborne:productLevel\", legend=True)"
"dataframe.plot(column=\"processing:level\", legend=True)"
]
},
{
@@ -1003,7 +1003,7 @@
}
],
"source": [
"dataframe.plot(kind=\"hist\", column=\"spaceborne:cloudCover\", range=(0, 100))"
"dataframe.plot(kind=\"hist\", column=\"eo:cloud_cover\", range=(0, 100))"
]
},
{
@@ -1017,9 +1017,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "demo_pygeodes",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "demo_pygeodes"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
@@ -1031,7 +1031,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.4"
"version": "3.10.12"
}
},
"nbformat": 4,
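The renames in this notebook (`spaceborne:continentsID` → `continent_code`, `temporal:endDate` → `end_datetime`, `spaceborne:cloudCover` → `eo:cloud_cover`, and so on) follow one consistent mapping. As a minimal, hypothetical migration helper — not part of pygeodes, with the mapping table assumed from the renames visible in this diff — an old-style query can be translated to the new STAC-style names like this:

```python
# Hypothetical helper (not part of pygeodes): translate legacy Geodes query
# keys to the STAC-standard names introduced by this change.
LEGACY_TO_STAC = {
    "spaceborne:continentsID": "continent_code",
    "spaceborne:cloudCover": "eo:cloud_cover",
    "spaceborne:productLevel": "processing:level",
    "spaceborne:tile": "grid:code",
    "spaceborne:absoluteOrbitID": "sat:absolute_orbit",
    "temporal:endDate": "end_datetime",
}

def migrate_query(query: dict) -> dict:
    """Return a copy of `query` with legacy keys replaced by STAC names."""
    return {LEGACY_TO_STAC.get(key, key): value for key, value in query.items()}

old_query = {
    "spaceborne:continentsID": {"eq": "AF"},
    "temporal:endDate": {"gte": "2024-01-01"},
}
print(migrate_query(old_query))
# {'continent_code': {'eq': 'AF'}, 'end_datetime': {'gte': '2024-01-01'}}
```

Keys absent from the mapping (already-standard names) pass through unchanged.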
4 changes: 2 additions & 2 deletions docs/source/examples/s3_example.ipynb
@@ -145,8 +145,8 @@
"items, dataframe = geodes.search_items(\n",
" intersects=geometry,\n",
" query={\n",
" \"spaceborne:cloudCover\": {\"lte\": 5},\n",
" \"temporal:endDate\": {\"gte\": date},\n",
" \"eo:cloud_cover\": {\"lte\": 5},\n",
" \"end_datetime\": {\"gte\": date},\n",
" },\n",
")"
]
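The query above combines a `lte` and a `gte` operator on two properties. To illustrate what such operator dictionaries express, here is a client-side sketch of the matching logic (illustrative only — pygeodes sends the query to the Geodes API rather than evaluating it locally):

```python
import operator

# Comparison operators used in pygeodes-style query dicts (illustrative subset).
OPS = {"eq": operator.eq, "lt": operator.lt, "lte": operator.le,
       "gt": operator.gt, "gte": operator.ge}

def matches(properties: dict, query: dict) -> bool:
    """True if every (field, operator) condition in `query` holds for `properties`."""
    for field, conditions in query.items():
        for op_name, expected in conditions.items():
            if field not in properties or not OPS[op_name](properties[field], expected):
                return False
    return True

item = {"eo:cloud_cover": 3.2, "end_datetime": "2024-06-01"}
query = {"eo:cloud_cover": {"lte": 5}, "end_datetime": {"gte": "2024-01-01"}}
print(matches(item, query))  # True
```

Note the date comparison here relies on ISO-8601 strings sorting lexicographically.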
14 changes: 7 additions & 7 deletions docs/source/examples/search-and-download.ipynb
@@ -105,8 +105,8 @@
"from pygeodes.utils.datetime_utils import complete_datetime_from_str\n",
"\n",
"query = {\n",
" \"spaceborne:tile\": {\"eq\": \"T31TCK\"},\n",
" \"temporal:endDate\": {\"gte\": complete_datetime_from_str(\"2023-01-01\")},\n",
" \"grid:code\": {\"eq\": \"T31TCK\"},\n",
" \"end_datetime\": {\"gte\": complete_datetime_from_str(\"2023-01-01\")},\n",
"}\n",
"items, dataframe = geodes.search_items(query=query)"
]
@@ -402,7 +402,7 @@
"source": [
"from pygeodes.utils.formatting import format_items\n",
"\n",
"dataframe_new = format_items(dataframe, {\"spaceborne:cloudCover\"})"
"dataframe_new = format_items(dataframe, {\"eo:cloud_cover\"})"
]
},
{
@@ -423,7 +423,7 @@
},
"outputs": [],
"source": [
"dataframe_filtered = dataframe_new[dataframe_new[\"spaceborne:cloudCover\"] < 30]"
"dataframe_filtered = dataframe_new[dataframe_new[\"eo:cloud_cover\"] < 30]"
]
},
{
@@ -934,9 +934,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "demo_finale",
"display_name": "pygeodes",
"language": "python",
"name": "demo_finale"
"name": "pygeodes"
},
"language_info": {
"codemirror_mode": {
@@ -948,7 +948,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.4"
"version": "3.11.10"
}
},
"nbformat": 4,
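This notebook searches one tile (`grid:code` equal to `T31TCK`) and then keeps items below 30% cloud cover. A related post-processing step — keeping only the least-cloudy item per tile — can be sketched with plain dictionaries (illustrative only; the item fields and ids below are invented):

```python
# Invented sample items with the STAC property names used in this diff.
items = [
    {"id": "a", "grid:code": "T31TCK", "eo:cloud_cover": 41.2},
    {"id": "b", "grid:code": "T31TCK", "eo:cloud_cover": 7.5},
    {"id": "c", "grid:code": "T31TDK", "eo:cloud_cover": 12.0},
]

# Keep, for each tile, the item with the lowest cloud cover.
best_per_tile = {}
for item in items:
    tile = item["grid:code"]
    if tile not in best_per_tile or item["eo:cloud_cover"] < best_per_tile[tile]["eo:cloud_cover"]:
        best_per_tile[tile] = item

print(sorted(chosen["id"] for chosen in best_per_tile.values()))  # ['b', 'c']
```

The same reduction could be done on the returned GeoDataFrame with a group-by, but the dictionary version keeps the idea dependency-free.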
2 changes: 1 addition & 1 deletion docs/source/user_guide/cli.rst
@@ -53,7 +53,7 @@ This will give you an **overview** of the results (as it's not very convenient t
Searching collections
---------------------

You can also search for collections by adding the parameter :option:`-c, --collections`. It allow only one argument, with is a search term that will be searched in the description and the title of the collections.
You can also search for collections by adding the parameter :option:`-cs, --collections_search`. It takes a single argument: a search term that is matched against the description and the title of each collection.
For example, to search for a collection related to the term *grd*, you can do :

.. code-block:: bash
8 changes: 1 addition & 7 deletions docs/source/user_guide/download_item.rst
@@ -83,10 +83,4 @@ Downloading from S3
-------------------

If you provided your S3 credentials in your conf, you can use `boto3 <https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration>`__ to download items directly from the datalake.
Provided your conf contains your S3 credentials (see :doc:`configuration`), any use of ``geodes.download_item_archive`` or ``item.download_archive`` will use the S3 client instead of geodes.
If you wish to use the s3 client for other purpose (exploring buckets for example), you can use the S3 client this way :

.. code-block:: python

for bucket in geodes.s3_client.buckets.all(): # s3_client is already configured with your credentials
print(bucket.name)
Provided your conf contains your S3 credentials (see :doc:`configuration`), any use of ``geodes.download_item_archive`` or ``item.download_archive`` will use the S3 client instead of geodes.
28 changes: 4 additions & 24 deletions docs/source/user_guide/manipulating_objects.rst
@@ -18,14 +18,14 @@ From STAC objects to dataframes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To create your first dataframe from STAC objects, you can use :py:func:`pygeodes.utils.formatting.format_collections` and :py:func:`pygeodes.utils.formatting.format_items`.
For example from a list of :py:class:`pygeodes.utils.stac.Item`, if I want to create a dataframe and add the column ``spaceborne:cloudCover`` :
For example from a list of :py:class:`pygeodes.utils.stac.Item`, if I want to create a dataframe and add the column ``eo:cloud_cover`` :

.. code-block:: python

from pygeodes.utils.formatting import format_items
items = [item1,item2,...]

dataframe = format_items(items,columns_to_add={"spaceborne:cloudCover"})
dataframe = format_items(items,columns_to_add={"eo:cloud_cover"})

If you pass a dataframe instead of a list of items to ``format_items``, the new columns are added to those already in the dataframe.

@@ -40,8 +40,8 @@ After having added the columns you want, you can filter your data using the data

.. code-block:: python

dataframe = format_items(items,columns_to_add={"spaceborne:cloudCover"})
filtered = dataframe[dataframe["spaceborne:cloudCover"] <= 10]
dataframe = format_items(items,columns_to_add={"eo:cloud_cover"})
filtered = dataframe[dataframe["eo:cloud_cover"] <= 10]

.. seealso::

@@ -60,26 +60,6 @@ Once we filtered our dataframe of items, we could want to download them, so we n
for item in items:
item.download_archive()

.. _serialization_of_dataframes:
Serialization of dataframes
^^^^^^^^^^^^^^^^^^^^^^^^^^^

You could want to serialize a dataframe to work with it later, it's possible using :py:func:`pygeodes.utils.formatting.export_dataframe`

.. code-block:: python

from pygeodes.utils.formatting import export_dataframe

export_dataframe(dataframe,"df.json")

and you can load it later using :py:func:`pygeodes.utils.formatting.load_dataframe` :

.. code-block:: python

from pygeodes.utils.formatting import export_dataframe

dataframe = load_dataframe("df.json")

Plotting and exploring data using dataframes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

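Conceptually, ``format_items`` turns per-item properties into dataframe columns. A rough, pygeodes-independent sketch of that flattening step (the item layout below is assumed for illustration, not the library's real internal representation):

```python
def to_columns(items: list, columns_to_add: set) -> dict:
    """Flatten item property dicts into a column -> list-of-values mapping."""
    table = {"id": [item["id"] for item in items]}
    for column in sorted(columns_to_add):
        # Missing properties become None, like empty cells in a dataframe.
        table[column] = [item["properties"].get(column) for item in items]
    return table

items = [
    {"id": "item-1", "properties": {"eo:cloud_cover": 8.5}},
    {"id": "item-2", "properties": {"eo:cloud_cover": 55.0}},
]
print(to_columns(items, {"eo:cloud_cover"}))
# {'id': ['item-1', 'item-2'], 'eo:cloud_cover': [8.5, 55.0]}
```

A column-oriented dict like this is exactly what a pandas ``DataFrame`` constructor accepts, which is why the filtering idiom ``dataframe[dataframe["eo:cloud_cover"] <= 10]`` works directly on the result.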
59 changes: 24 additions & 35 deletions docs/source/user_guide/quickstart.ipynb
Expand Up @@ -19,7 +19,19 @@
"metadata": {
"tags": []
},
"outputs": [],
"outputs": [
{
"ename": "ImportError",
"evalue": "cannot import name 'Geodes' from 'pygeodes' (unknown location)",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mImportError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[0;32mIn[1], line 1\u001b[0m\n\u001b[0;32m----> 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;21;01mpygeodes\u001b[39;00m\u001b[38;5;250m \u001b[39m\u001b[38;5;28;01mimport\u001b[39;00m Geodes\n\u001b[1;32m 3\u001b[0m geodes \u001b[38;5;241m=\u001b[39m Geodes()\n",
"\u001b[0;31mImportError\u001b[0m: cannot import name 'Geodes' from 'pygeodes' (unknown location)"
]
}
],
"source": [
"from pygeodes import Geodes\n",
"\n",
@@ -451,7 +463,7 @@
"source": [
"new_dataframe = format_collections(\n",
" collections,\n",
" columns_to_add={\"summaries.dcs:satellite\", \"summaries.dcs:sensor\"},\n",
" columns_to_add={\"summaries.constellation\", \"summaries.instruments\"},\n",
")"
]
},
@@ -604,7 +616,7 @@
"id": "003205c0-7707-4d1d-9a1e-0bbf6922176e",
"metadata": {},
"source": [
"We see we can use `spaceborne:absoluteOrbitID`. Let's search for example those whose orbit direction is 30972: "
"We see we can use `sat:absolute_orbit`. Let's search, for example, for those whose absolute orbit is 30972: "
]
},
{
Expand Down Expand Up @@ -646,8 +658,8 @@
}
],
"source": [
"query = {\"spaceborne:absoluteOrbitID\": {\"eq\": 30972}}\n",
"items, dataframe = geodes.search_items(query=query)"
"query = {\"sat:absolute_orbit\": {\"eq\": 30972}}\n",
"items, dataframe = geodes.search_items(query=query, collections=['PEPS_S2_L1C'])"
]
},
{
@@ -658,29 +670,6 @@
"Again, we come out with an `items` object, and a `dataframe` object."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "68ab9b50-6120-4eb6-85ff-fefabdb96708",
"metadata": {
"tags": []
},
"outputs": [
{
"data": {
"text/plain": [
"852"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"len(items)"
]
},
{
"cell_type": "code",
"execution_count": 14,
@@ -973,7 +962,7 @@
"id": "8f3688df-a3d6-4e11-9138-1ab2c4e1e958",
"metadata": {},
"source": [
"We see we can use `spaceborne:cloudCover`. We can add it using `format_items`."
"We see we can use `eo:cloud_cover`. We can add it using `format_items`."
]
},
{
@@ -988,7 +977,7 @@
"from pygeodes.utils.formatting import format_items\n",
"\n",
"new_dataframe = format_items(\n",
" dataframe, columns_to_add={\"spaceborne:cloudCover\"}\n",
" dataframe, columns_to_add={\"eo:cloud_cover\"}\n",
")"
]
},
@@ -1217,8 +1206,8 @@
"outputs": [],
"source": [
"filtered = new_dataframe[\n",
" (new_dataframe[\"spaceborne:cloudCover\"] <= 40)\n",
" & (new_dataframe[\"spaceborne:cloudCover\"] >= 39)\n",
" (new_dataframe[\"eo:cloud_cover\"] <= 40)\n",
" & (new_dataframe[\"eo:cloud_cover\"] >= 39)\n",
"]"
]
},
@@ -2306,9 +2295,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "demo_pygeodes",
"display_name": "pygeodes",
"language": "python",
"name": "demo_pygeodes"
"name": "pygeodes"
},
"language_info": {
"codemirror_mode": {
@@ -2320,7 +2309,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.4"
"version": "3.11.10"
}
},
"nbformat": 4,
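The quickstart passes dotted column names such as `summaries.constellation` to `format_collections`. A small sketch of how such dotted paths can be resolved against nested collection metadata (a hypothetical helper, not the pygeodes implementation; the sample collection dict is invented):

```python
def get_path(mapping: dict, dotted_path: str):
    """Resolve a 'summaries.constellation'-style dotted path in nested dicts."""
    value = mapping
    for part in dotted_path.split("."):
        if not isinstance(value, dict) or part not in value:
            return None  # a missing level yields an empty cell rather than an error
        value = value[part]
    return value

collection = {
    "id": "PEPS_S2_L1C",
    "summaries": {"constellation": ["sentinel-2"], "instruments": ["msi"]},
}
print(get_path(collection, "summaries.constellation"))  # ['sentinel-2']
print(get_path(collection, "summaries.missing"))        # None
```

Returning ``None`` for absent paths mirrors how a dataframe column would simply be empty for collections lacking that summary.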
8 changes: 1 addition & 7 deletions docs/source/user_guide/search_collections.rst
@@ -16,7 +16,7 @@ But you can also provide a query in JSON format :

.. code-block:: python

query = {'id' : {'contains' : 'PEPS'}}
query = {'title' : {'contains' : 'PEPS'}}
collections,dataframe = geodes.search_collections(query=query)

.. seealso::
@@ -38,12 +38,6 @@ If you wish to get only the collections, you can use the parameter ``return_df=F

collections = geodes.search_collections(query=query,return_df=False)

By default, it returns all the objects corresponding to your query, so it can be long (making many API calls) if your query is not really precise. You could just want a little overview of the objects, you can set the parameter ``get_all=False``, to get just the first items returned (by making just one API call).

.. code-block:: python

collections = geodes.search_collections(query=query,return_df=False,get_all=False)

.. seealso::

You can refer to the implementation of ``search_collections`` for further details (:py:meth:`Geodes.search_collections`)
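To illustrate what a ``contains`` query such as ``{'title': {'contains': 'PEPS'}}`` expresses, here is a client-side sketch of the same filter over plain collection dicts (illustrative only — the real matching is performed server-side by the Geodes API, and the sample collections are invented):

```python
def filter_collections(collections: list, query: dict) -> list:
    """Keep collections whose fields contain the queried search terms (case-insensitive)."""
    kept = []
    for coll in collections:
        if all(str(cond.get("contains", "")).lower() in str(coll.get(field, "")).lower()
               for field, cond in query.items()):
            kept.append(coll)
    return kept

collections = [
    {"id": "c1", "title": "PEPS Sentinel-2 L1C"},
    {"id": "c2", "title": "Landsat archive"},
]
print([c["id"] for c in filter_collections(collections, {"title": {"contains": "PEPS"}})])
# ['c1']
```

The case-insensitive substring match is an assumption for readability; the API's exact matching semantics may differ.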