<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Spark on Ayoub Fakir</title>
    <link>/tags/spark/</link>
    <description>Recent content in Spark on Ayoub Fakir</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <lastBuildDate>Fri, 19 Jul 2024 10:46:12 +0200</lastBuildDate>
    <atom:link href="/tags/spark/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>[FR] Moving Spark Workloads from EMR to Kubernetes</title>
      <link>/post/fr-passer-de-emr-vers-kubernetes-pour-les-workloads-spark/</link>
      <pubDate>Thu, 18 Feb 2021 04:26:07 +0200</pubDate>
      <guid>/post/fr-passer-de-emr-vers-kubernetes-pour-les-workloads-spark/</guid>
      <description>&lt;h2 id=&#34;introduction&#34;&gt;Introduction&lt;/h2&gt;
&lt;p&gt;AWS EMR is a widely used AWS service, mainly for processing massive volumes of data with Apache Spark on a dedicated Hadoop cluster. Beyond its core function, EMR ships with a good number of open-source tools, some for monitoring (Ganglia) and others for querying data (Hive). More information can be found &lt;a href=&#34;https://docs.aws.amazon.com/fr_fr/emr/latest/ManagementGuide/emr-what-is-emr.html&#34;&gt;here&lt;/a&gt;.
Depending on the context, EMR can be used either as an ephemeral cluster (for example, launching a cluster every 6 hours to run Spark jobs) or as a permanent one. The latter is common when the cluster is shared by several teams, runs streaming jobs, or when waiting for it to start up costs more than keeping it running permanently.
This article is not meant as a head-to-head comparison of EMR and Kubernetes, since the two do not address the same needs. Kubernetes is becoming increasingly prevalent today for a variety of reasons, and Spark natively supports Kubernetes as a scheduler and resource manager, so it would have been a shame not to look into it.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[EN] Migrating from a plain Spark Application to ZparkIO</title>
      <link>/post/en-migrating-from-a-plain-spark-application-to-zparkio/</link>
      <pubDate>Fri, 16 Oct 2020 10:36:00 +0200</pubDate>
      <guid>/post/en-migrating-from-a-plain-spark-application-to-zparkio/</guid>
      <description>&lt;h1 id=&#34;migrating-from-a-plain-spark-application-to-zio-with-zparkio&#34;&gt;Migrating from a plain Spark Application to ZIO with ZparkIO&lt;/h1&gt;
&lt;p&gt;In this article, we&amp;rsquo;ll see how you can migrate your Spark Application into &lt;a href=&#34;https://zio.dev&#34;&gt;ZIO&lt;/a&gt; and &lt;a href=&#34;https://github.com/leobenkel/ZparkIO&#34;&gt;ZparkIO&lt;/a&gt;, so you can benefit from all the wonderful features that ZIO offers and that we&amp;rsquo;ll be discussing.&lt;/p&gt;
&lt;h2 id=&#34;what-is-zio&#34;&gt;What is ZIO?&lt;/h2&gt;
&lt;p&gt;ZIO is defined, according to the official documentation, as &lt;strong&gt;a library for asynchronous and concurrent programming that is based on pure functional programming.&lt;/strong&gt; In other words, ZIO helps us write type-safe, composable, and easily testable code, all while staying free of side effects.
&lt;strong&gt;ZIO is a data type&lt;/strong&gt;. Its signature, &lt;em&gt;ZIO[R, E, A]&lt;/em&gt;, shows that it takes three type parameters:&lt;/p&gt;</description>
    </item>
    <item>
      <title>[EN] CI/CD pipeline using Github Actions, SBT and AWS S3 - Part 1</title>
      <link>/post/en-ci/cd-pipeline-using-github-actions-sbt-and-aws-s3-part-1/</link>
      <pubDate>Wed, 08 Apr 2020 04:35:59 +0200</pubDate>
      <guid>/post/en-ci/cd-pipeline-using-github-actions-sbt-and-aws-s3-part-1/</guid>
      <description>&lt;p&gt;GitHub now lets us build continuous integration and continuous deployment workflows for our repositories thanks to GitHub Actions, on almost all GitHub plans.&lt;/p&gt;
&lt;p&gt;In this tutorial, we’re going to walk through building a CI/CD pipeline for a Scala / Spark project. We will use SBT, the Scala Build Tool, to produce a jar that we will then deploy to AWS S3 using a custom GitHub Action.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Why combine asynchronous and distributed calculations to tackle the biggest data quality challenges</title>
      <link>/post/why-combine-asynchronous-and-distributed-calculations-to-tackle-the-biggest-data-quality-challenges/</link>
      <pubDate>Fri, 17 Mar 2017 05:47:36 +0200</pubDate>
      <guid>/post/why-combine-asynchronous-and-distributed-calculations-to-tackle-the-biggest-data-quality-challenges/</guid>
      <description>&lt;p&gt;Article co-authored by Martin Delobel and available on &lt;a href=&#34;https://medium.com/decathlondigital/why-combine-asynchronous-and-distributed-calculations-to-tackle-the-biggest-data-quality-challenges-2e04dfc51401&#34;&gt;Medium&lt;/a&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>[EN] 10&#43; Great Books for Apache Spark</title>
      <link>/post/en-10-great-books-for-apache-spark/</link>
      <pubDate>Fri, 13 Jan 2017 05:45:12 +0200</pubDate>
      <guid>/post/en-10-great-books-for-apache-spark/</guid>
      <description>&lt;p&gt;This article was co-authored by &lt;a href=&#34;https://blog.matthewrathbone.com/&#34;&gt;Matthew Rathbone&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img alt=&#34;img&#34; loading=&#34;lazy&#34; src=&#34;https://d33wubrfki0l68.cloudfront.net/8177dc9c6ec5935b75460f41e29cecfebe9a5c20/2662a/img/blog/books.jpg&#34;&gt;&lt;/p&gt;
&lt;p&gt;image by &lt;a href=&#34;https://unsplash.com/photos/eeSdJfLfx1A&#34;&gt;Ed Robertson&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Apache Spark is a super useful distributed processing framework that works well with Hadoop and YARN. Many industry users have reported it to be 100x faster than Hadoop MapReduce for certain memory-heavy tasks, and 10x faster when processing data on disk.&lt;/p&gt;
&lt;p&gt;While Spark has incredible power, it is not always easy to find good resources or books to learn more about it, so I thought I’d compile a list. I’ll keep this list up to date as new resources come out.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
