Python Data Science Handbook (jakevdp.github.io)
207 points by cl3misch 12 hours ago | 39 comments
  • farhanhubble 9 hours ago

    I loved his Statistics for Hackers talk: https://speakerdeck.com/pycon2016/jake-vanderplas-statistics...

    • yboris 5 hours ago | parent

      Amazing! Thank you for sharing.

      Reminds me of how thinking in frequencies rather than computing probabilities is easier and can avoid errors (e.g. a positive result from a 99% accurate test does not mean a 99% likelihood of having the disease when the disease has a 1/10,000 prevalence in the population).
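
      A rough back-of-the-envelope of that example in Python, assuming (for illustration only) 99% sensitivity and 99% specificity:

        # Out of 1,000,000 people, how many positive tests are true positives?
        population = 1_000_000
        prevalence = 1 / 10_000
        sensitivity = 0.99   # P(positive | disease)
        specificity = 0.99   # P(negative | no disease)

        sick = population * prevalence                  # 100 people
        healthy = population - sick                     # 999,900 people

        true_positives = sick * sensitivity             # 99
        false_positives = healthy * (1 - specificity)   # 9,999

        print(true_positives / (true_positives + false_positives))  # ~0.0098, i.e. about 1%, not 99%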

  • ellisv 10 hours ago

    These types of books are always interesting to me because they tackle so many different things. They cover a range of topics at a high level (data manipulation, visualization, machine learning), each of which could have its own book. They balance teaching programming with introducing concepts (and sometimes theory).

    In short, I think it's hard to strike an appropriate balance between these, but this seems to be a good intro-level book.

  • pantsforbirds 5 hours ago

    I used the Kernel Density Estimation (KDE) page/blog at my very first job. It was immensely useful and I've loved his work ever since.

  • badmonster 6 hours ago

    VanderPlas' handbook remains remarkably relevant despite rapid ecosystem changes. His focus on fundamentals - NumPy, Pandas, Matplotlib - rather than trendy libraries is why. The tools change, but understanding data structures, vectorization, and visualization principles doesn't age.
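
    As a minimal illustration of the vectorization idea (made-up numbers, nothing from the book itself): NumPy applies operations to whole arrays at once instead of looping element by element in Python.

      import numpy as np

      prices = np.array([10.0, 12.5, 9.9])
      quantities = np.array([3, 1, 4])

      # Vectorized elementwise multiply; no explicit Python loop.
      revenue = prices * quantities
      print(revenue.sum())  # ~82.1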

  • trio8453 8 hours ago

    This book was absolute fire for getting started with data science in 2017-2018; Jake is a great teacher.

  • BenGosub 11 hours ago

    He's a great writer and I miss his blog. He had an awesome post on pivot tables that I think is now a part of this book.

    • ayhanfuat 8 hours ago | parent

      He is also the creator of the Altair visualization library (Vega-Lite in Python: https://altair-viz.github.io/). I really like using it.
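
      For anyone who hasn't tried it, a minimal sketch of the declarative style (made-up data; assumes altair and pandas are installed):

        import altair as alt
        import pandas as pd

        df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [3, 1, 4, 2]})

        # Declare the chart as data + mark type + encodings.
        chart = alt.Chart(df).mark_line().encode(x="x", y="y")
        chart.save("chart.html")  # or just display it in a notebook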

      • linhns 7 hours ago | parent

        Thanks for the fact. I've used Altair at times and really admire its simplicity, without knowing it was written by Jake.

      • AI-NoGuardrails 8 hours ago | parent

        very cool!

  • sschnei8 9 hours ago

    Interesting choice of Pandas in this day and age. Maybe he’s after imparting general concepts that you could apply to any tabular data manipulator rather than selecting for the latest shiny tool.

    • dahcryn 9 hours ago | parent

      Why? It's the industry standard as far as I can see.

      What other framework would you replace it with?

      No, Polars or Spark is not a good answer; those are optimized for data engineering performance, not a holistic approach to data science.

      • crystal_revenge 8 hours ago | parent

        You can assert whatever you want, but Polars is a great answer. The performance improvements are secondary to me compared to the dramatic improvement in interface.

        Today all serious DS work will ultimately become data engineering work anyway. The time when DS can just fiddle around in notebooks all day has passed.

        • this_user 6 hours ago | parent

          Pandas is widely adopted and deeply integrated into the Python ecosystem. Meanwhile, Polars remains a small niche, and it's one of those hype technologies that will likely be dead in 3 years once most of its users realise that it offers them no actual practical advantages over Pandas.

          If you are dealing with huge data sets, you are probably using Spark or something like Dask already where jobs can run in the cloud. If you need speed and efficiency on your local machine, you use NumPy outright. And if you really, really need speed, you rewrite it in C/C++.

          Polars is trying to solve an issue that just doesn't exist for the vast majority of users.

          • stdbrouw 6 hours ago | parent

            Arguably Spark solves a problem that no longer exists: single-node performance with tools like DuckDB and Polars is so good that there’s no need for more complex orchestration, and these tools are sufficiently user-friendly that there is little point in switching to Pandas for smaller datasets.

          • crystal_revenge 4 hours ago | parent

            > Pandas is widely adopted and deeply integrated into the Python ecosystem.

            This is pretty laughable. Yes, there are very DS-specific tools that make good use of Pandas, but `to_pandas` in Polars trivially solves this. The fact that Pandas always feels like injecting some weird DSL into existing Python code bases is one of the major reasons why I really don't like it.

            > If you are dealing with huge data sets, you are probably using Spark or something like Dask already where jobs can run in the cloud. If you need speed and efficiency on your local machine, you use NumPy outright. And if you really, really need speed, you rewrite it in C/C++.

            Have you used Polars at all? Or, for that matter, written significant Pandas outside of a notebook? The number one benefit of Polars, imho, is that Polars works using Expressions that allow you to trivially compose and reuse fundamental logic when working with data, in a way that works well with other Python code. This solves the biggest problem with Pandas, which is that it does not abstract well.

            Not to mention that Pandas is a really poor dataframe experience outside of its original use case, which was financial time series. The entire multi-index experience is awful, and I know that either you are calling 'reset_index' multiple times in your Pandas logic or you have bugs.

          • minimaxir 5 hours ago | parent

            > once most of its users realise that it offers them no actual practical advantages over Pandas

            What? Speed and better nested data support (arrays/JSON) alone are extremely useful to every data scientist.
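
            A small sketch of the nested-data point (made-up columns; the .list.len()/.list.mean() names assume a reasonably recent Polars version):

              import polars as pl

              df = pl.DataFrame({"user": ["a", "b"], "scores": [[1, 2, 3], [4, 5]]})

              # Operate directly on the nested lists; no exploding or apply() needed.
              out = df.with_columns(
                  pl.col("scores").list.len().alias("n_scores"),
                  pl.col("scores").list.mean().alias("mean_score"),
              )
              print(out)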

            My productivity skyrocketed after switching from pandas to polars.

        • SiempreViernes 4 hours ago | parent

          > Today all serious DS work will ultimately become data engineering work anyway.

          Oh yeah? Well in my ivory tower the work stops being serious once it becomes engineering. How do you like that elitism?!

          • crystal_revenge 3 hours ago | parent

            "Data Science" has never been related to academic research, it has always emerged in a business context. I wouldn't say that researchers at Deep Mind are "data scientists", they are academic researchers who focus on shipping papers. If you're in a pure research environment, nobody cares if you write everything in Matlab.

            But the last startup I was at, which tried to take a similarly research-style approach, was unable to ship a functioning product and will likely disappear a year from now. FAIR has been largely disbanded in favor of the way more shipping-centric MSL, and the people I know at DeepMind are increasingly finding themselves under pressure to actually produce things.

            Since you've been hanging out in an ivory tower, you might be unaware that during the peak DS frenzy (2016-2019) there were companies where data scientists were allowed to live entirely in notebooks, and it was someone else's problem to ship their notebooks. Today, if you have that expectation you won't last long at most companies, if you can even find a job in the first place.

            On top of that, I know quite a few people at the major LLM teams and, based on my conversations, all of them are doing pretty serious data engineering work to get things shipped, even if they were hired for their modeling expertise. It's honestly hard to even run serious experiments at the scale of modern-day LLMs without being pretty proficient at data-engineering-related tasks.

      • porker 9 hours ago | parent

        > No, polars or spark is not a good answer, those are optimized for data engineering performance, not a holistic approach to data science.

        Can you expand on why Polars isn't optimised for a holistic approach to data science?

        • fifilura 8 hours ago | parent

          I have not worked with Polars, but I would imagine any incompatibility with existing libraries (e.g. plotting libraries like plotnine or bokeh) would quickly put me off.

          It is a curse, I know. I would also choose a better interface. Performance is meh to me; I use SQL if I want to do something at scale that involves row/column data.

          • rbartelme 7 hours ago | parent

            This is a non-issue given the Polars dataframe's to_pandas() method. You get all the performance of Polars for cleaning large datasets, and to_pandas() gives you backwards compatibility with other libraries. That said, plotnine is already completely compatible with Polars dataframe objects anyway.
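
            Roughly the workflow being described, with made-up data: do the heavy lifting in Polars, then convert only at the boundary with pandas-only libraries.

              import polars as pl

              pl_df = pl.DataFrame({"x": [1, 2, 3], "y": [2.0, 4.0, 8.0]})
              cleaned = pl_df.filter(pl.col("y") > 2)

              # Hand a pandas DataFrame to anything that expects one
              # (to_pandas() needs pyarrow installed).
              pd_df = cleaned.to_pandas()
              print(type(pd_df))  # <class 'pandas.core.frame.DataFrame'>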

          • maleldil 7 hours ago | parent

            You can always convert from Polars to Pandas. Plotnine will do it automatically for you, even.

      • minimaxir 7 hours ago | parent

        What can you do more easily in pandas than in polars?

    • msto 9 hours ago | parent

      It was originally published in 2016, and I think this is still the first edition.

    • maxnoe 8 hours ago | parent

      The book is actually quite old; not sure "this day and age" still applies to it.

    • xenophonf 9 hours ago | parent

      What's wrong with Pandas?

      • crystal_revenge 3 hours ago | parent

        Pandas is generally awful unless you're just living in a notebook (and even then it's probably my least favorite implementation of the 'data frame' concept).

        Since Pandas lacks Polars' concept of an Expression, it's actually quite challenging to programmatically interact with non-trivial Pandas queries. In Polars, the query logic can be entirely independent of the data frame while still referencing specific columns of it. This makes Polars data frames work much more naturally with typical programming abstractions.
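
        A minimal sketch of what that looks like (made-up columns): the expressions are plain Python objects built with no DataFrame in sight, and can be reused against any frame with matching columns.

          import polars as pl

          # Reusable query logic, defined independently of any data.
          is_adult = pl.col("age") >= 18
          revenue = (pl.col("price") * pl.col("quantity")).alias("revenue")

          df = pl.DataFrame({
              "age": [15, 34, 52],
              "price": [9.99, 19.99, 5.00],
              "quantity": [1, 2, 10],
          })

          # Apply the same expressions here, or to any other frame later.
          print(df.filter(is_adult).select(pl.col("age"), revenue))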

        Pandas multi-index is a bad idea in nearly all contexts other than its original use case: financial time series (and I'll admit, if you're working with purely financial time series, then Pandas feels much better). Sufficiently large Pandas code bases are littered with seemingly arbitrary uses of 'reset_index', there are many cases where multi-index will create bugs, and, most importantly, I've never seen any non-financial scenario where anyone has ever used multi-index to their advantage.

        Finally, Pandas is slow, which is honestly the lowest priority for me personally, but using Polars is so refreshing.

        What other data frames have you used? Having used R's native dataframes extensively (the way they make use of indexing is so much nicer) in addition to Polars, I find both drastically preferable to Pandas. My experience is that most people use Pandas because it has been the only data frame implementation in Python. But personally I'd rather just not use data frames at all if I'm forced to use Pandas. Could you expand on what you like about Pandas over the other data frame models you've worked with?

      • clickety_clack 9 hours ago | parent

        I probably wouldn’t rewrite an entire data science stack that used pandas, but most people would use polars if starting a new project today.

        • biofox 8 hours ago | parent

          R and Matlab workflows have been fairly stable for the past decade. Why is the Python ecosystem so... unstable? It puts me off investing any time in it.

          • clickety_clack 8 hours ago | parent

            The R ecosystem has had a similar evolution with the tidyverse; it was just a little longer ago. As for Matlab, I initially learned statistical programming with it a long time ago, but I’m not sure I’ve ever seen it in the wild. I don’t know what’s going on there.

            I’m actually quite partial to R myself, and I used to use it extensively back when quick analysis was more valuable to my career. Things have probably progressed, but I dropped it in favor of python because python can integrate into production systems whereas R was (and maybe still is) geared towards writing reports. One of the best things to happen recently in data science is the plotnine library, bringing the grammar of graphics to python imho.
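
            For the curious, a minimal plotnine sketch of the grammar-of-graphics style (made-up data):

              from plotnine import ggplot, aes, geom_point
              import pandas as pd

              df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [2, 5, 3, 8]})

              # Data + aesthetic mappings + a geometry, composed with `+`.
              plot = ggplot(df, aes(x="x", y="y")) + geom_point()
              plot.save("scatter.png")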

            The fact is that today, if you want career opportunities as a data scientist, you need to be fluent in python.

          • crystal_revenge 3 hours ago | parent

            I love R, but how can you make that claim when R uses three distinct object-oriented systems all at the same time? R might seem stable only because it carries along with it 50 years of programming-language history (part of its charm; where else can you see the generic function approach to OOP in a language that's still evolving?).

            Finally, as someone who wrote a lot of R pre-tidyverse, I've seen the entire ecosystem radically change over my career.

          • rbartelme 7 hours ago | parent

            Outside Bioconductor or the tidyverse, R can be just as unstable due to CRAN's package requirements.

      • amelius 8 hours ago | parent

        Pandas turns 10x developers with a lust for life into 0.1x developers with grey hairs.

        • cbare 3 hours ago | parent

          Ha, I think that happens regardless of the tech you use. Just blame time.

  • __rito__ 9 hours ago

    This is one of the few books that I read cover-to-cover when I was starting out learning Data Science in 2020/21. Would recommend.

  • wiz21c 9 hours ago

    I wouldn't call it a handbook; it's more like an introduction. But it's pretty well written.

  • synergy20 9 hours ago

    It was written 8 years ago, though; there is a 2nd edition of the book by the same author.

    • phone_book 8 hours ago | parent

      The linked GitHub repo seems to have the 2nd edition in the form of notebooks (https://github.com/jakevdp/PythonDataScienceHandbook/blob/ma...). Under the Using Code Examples section it says "attribution usually includes the title, author, publisher, and ISBN. For example: 'Python Data Science Handbook, 2nd edition, by Jake VanderPlas (O’Reilly). Copyright 2023...'", compared to the OP's link, which has "The Python Data Science Handbook by Jake VanderPlas (O’Reilly). Copyright 2016..."