<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Zsh on galvanist</title>
    <link>/tags/zsh/</link>
    <description>Recent content in Zsh on galvanist</description>
    <generator>Hugo</generator>
    <language>en</language>
    <lastBuildDate>Fri, 15 Nov 2013 23:08:00 +0000</lastBuildDate>
    <atom:link href="/tags/zsh/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Revisiting Shell Concurrency (this time in ZSH)</title>
      <link>/posts/2013-11-15-revisiting-shell-concurrency-this-time-in-zsh/</link>
      <pubDate>Fri, 15 Nov 2013 23:08:00 +0000</pubDate>
      <guid>/posts/2013-11-15-revisiting-shell-concurrency-this-time-in-zsh/</guid>
      <description>&lt;p&gt;I&amp;rsquo;ve been thinking about &lt;a href=&#34;http://galvanist.com/post/51134915590/managed-concurrency-in-the-bash-shell&#34;&gt;concurrency in the command shell&lt;/a&gt; again. This was prompted by my ongoing transition from Bash to ZSH. I&amp;rsquo;ve decided to re-implement my &lt;code&gt;conc&lt;/code&gt; and &lt;code&gt;xconc&lt;/code&gt; functions in a slightly different way.&lt;/p&gt;&#xA;&lt;h2 id=&#34;intro&#34;&gt;Intro&lt;/h2&gt;&#xA;&lt;p&gt;Let&amp;rsquo;s say you have 50 data files to compress. Here&amp;rsquo;s one way:&lt;/p&gt;&#xA;&lt;pre tabindex=&#34;0&#34;&gt;&lt;code&gt;% xz *.dat&#xA;&amp;lt;&amp;lt; completed in 2 minutes, 4 seconds &amp;gt;&amp;gt;&#xA;%&#xA;&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;That&amp;rsquo;s not bad, but it doesn&amp;rsquo;t really take advantage of all those fancy cores we have in our computers these days. Let&amp;rsquo;s run multiple &lt;code&gt;xz&lt;/code&gt; jobs as-parallel-as-possible, so that each file gets its own &lt;code&gt;xz&lt;/code&gt; process.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
