Author: sbyd2umjwvnz

  • virtualenv-tools

    Visit original content creator repository
    https://github.com/marthjod/virtualenv-tools

  • YubiKey-Guide

    Visit original content creator repository
    https://github.com/arcsource/YubiKey-Guide

  • explorer

    MIT License
    
    Copyright (c) 2016-2020 Ethereum Classic, and contributors to this project
    Copyright (c) 2018-2020 MintMe.com Coin project
    
    Permission is hereby granted, free of charge, to any person obtaining a copy
    of this software and associated documentation files (the "Software"), to deal
    in the Software without restriction, including without limitation the rights
    to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
    copies of the Software, and to permit persons to whom the Software is
    furnished to do so, subject to the following conditions:
    
    The above copyright notice and this permission notice shall be included in all
    copies or substantial portions of the Software.
    
    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
    IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
    FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
    AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
    LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
    OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
    SOFTWARE.
    

    Visit original content creator repository
    https://github.com/mintme-com/explorer

  • google-realtime-app

    😅 NOTE: I’m a Spanish speaker and my English is pretty bad; I promise to rewrite the documentation properly so everyone can understand it. I’m sorry.

    Google Realtime App

    We need to do the following actions:

    • Add values to a Google Sheet
    • Push these values automatically to a Firebase Realtime Database
    • Observe the database and show the data in the frontend every time the Google Sheet data changes.

    Google Sheets (CMS) > Google Apps Script (Service) > G Suite Hub (Autom.) > Firebase Database (DB) > Angular (Front)

    Setup front and firebase hosting project

    • Open terminal and create a new Angular project ng new <project>
    • Go to your project folder cd <project>
    • Create a new project in Firebase Console »
    • Install Firebase CLI » with npm install -g firebase-tools
    • In your terminal, log in to Firebase with firebase login (complete the auth process)
    • Init firebase project firebase init
    • Select (*) Hosting (select with space then press enter)
    • Select [don't setup a default project] (selected by default)
    • Select public folder (selected by default)
    • DON’T configure as a single-page app (selected by default)
    • Edit .firebaserc and paste the following code

    {
      "projects": {
        "default": "<your-project-id>"
      }
    }
    
    • Type firebase list, copy the ID of the project you just created, then replace it in .firebaserc
    • Edit angular.json file and replace outputPath: dist/<project> with "outputPath": "public"
    • Edit .gitignore file and ignore the /public folder
    • We need to build the frontend, so type npm run build
    • Make a deploy to firebase hosting with firebase deploy
    • Open your browser and go to https://your-project-id.firebaseapp.com/ to see the static frontend

    💾 It is a good time to commit your changes.

    Firebase Realtime Database

    • Go to Firebase Console » > Database and create a new Realtime Database (press next on the pop up)
    • In Database go to Rules tab and set read value to true then Publish the rules on the blue message

    Google Sheet

    • Create a Google Sheet file in your Google Drive account
    • Open it and add some title and save (maybe <your-project>)
    • Then fill it with some data like this:

    console            price
    Playstation 4      299
    Playstation 4 Pro  400
    Xbox One S         299
    Xbox One X         500
    Nintendo Switch    299
    Nintendo 2DS       140

    • Then in the toolbar click on Tools > Apps Script Editor (it will open a new tab)
    • Add a title and save; I recommend the same title as the Google Sheet (maybe <your-project>)

    We need the Apps Script ID later to clone this project with Clasp » and use ES6 features

    Apps Scripts with Clasp

    Apps Scripts must be written in JavaScript. Of course we can write our scripts in the browser, but the browser script editor only supports most ES5 features. We are in 2019 and we need ES6 features!

    Fortunately, Google has created Clasp ». It is an open-source tool, separate from the Apps Script platform, that lets you develop and manage Apps Script projects from your terminal rather than the Apps Script editor.

    • In the root folder create a new folder mkdir clasp
    • Then go to this folder cd clasp
    • Ok, open your terminal and install Clasp with npm install @google/clasp -g
    • Login to the Clasp CLI clasp login
    • Don’t create a new script, we need to clone an existing project script. Open the Apps Script project (browser) created in the previous section and go to File > Project properties then copy the Apps Script ID
    • Back to the terminal we need to clone the project clasp clone <scriptId>
    • For ES6 features, we need to write our scripts in TypeScript, so change the extension of your script files from *.js to *.ts

    We will deploy our scripts with Clasp CLI later.

    💾 It is a good time to commit your changes.

    Get Google Sheet data and push it to Firebase Database

    • Go to clasp/ folder and rename our new <script>.ts file to index.ts
    • Edit this and paste the following code

    function writeDataToFirebase() {
      const firebaseUrl = "<firebase_url>";
      const secret = "<firebase_secret_id>";
      const base = FirebaseApp.getDatabaseByUrl(firebaseUrl, secret);
    
      const spreadsheetId = "<google_sheet_id>";
      const rangeName = "<tab_name>";
      const data = Sheets.Spreadsheets.Values.get(spreadsheetId, rangeName).values;
    
      if (!data) {
        Logger.log("No data found.");
      } else {
        const parsed = data.parseData(); // parse once, then log and store
        Logger.log(parsed);
        base.setData("consoles", parsed);
      }
    }
    
    // Turn [[header...], [row...], ...] into an array of objects keyed by the header row
    Array.prototype.parseData = function() {
      const [keys, ...rows] = this;
      return rows
        .filter(row => row.length)
        .map(row => {
          let obj = {};
          row.forEach((item, i) => {
            obj = { ...obj, [keys[i]]: item };
          });
          return obj;
        });
    };
    • Replace <firebase_url>, <firebase_secret_id>, <google_sheet_id> and <tab_name> with the correct values.
    • Now we need to push our TypeScript Clasp script to the Apps Script platform (browser), so type clasp push (you can use clasp push --watch to push each time you save the file)
    • Be careful, don’t commit/push the <firebase_secret_id>
    • Go to our Apps Script project in the browser and see the code compiled from TypeScript to JavaScript
    • Run the script. It will fail, because we haven’t imported the FirebaseApp and Sheets API libraries yet.

    Add Sheets API

    More info: Google Sheets API v4 »

    • In the Apps Script editor, click Resources > Advanced Google Services.
    • Locate Google Sheets API in the dialog and click the corresponding toggle, setting it to on.
    • Click the Google API Console link.
    • Enter "Google Sheets API" into the search box and click on the corresponding entry in the results.
    • Click the Enable API button.
    • Return to the Apps Script editor and click the OK button on the Advanced Google Services dialog.

    Add FirebaseApp API

    More info: Connect Firebase to Google services »

    • In the script editor, click on Resources > Libraries
    • A popup box opens. Insert the following project key MYeP8ZEEt1ylVDxS7uyg9plDOcoke7-2l in the textbox and add the library (more info »)
    • Click on the box listing the different versions of the library. Select the latest public release
    • Click save. You can now use FirebaseApp.
    • Run the script again

    Google will ask for permissions. That is OK, grant them.

    At this point, you should have been able to push the Google Sheet data to Firebase Realtime Database.

    💾 It is a good time to commit your changes.

    “Observe” the database from Angular

    • Create a new component ng g c consoles
    • Display <app-consoles> on app.component.html
    • Serve the app with npm start or ng serve
    • Install angularfire2 » with npm install firebase @angular/fire --save
    • Implement the following code

    consoles.component.ts

    Import angularfire2 and rxjs into consoles.component.ts

    import { AngularFireDatabase } from "@angular/fire/database";
    import { Observable } from "rxjs";

    Inject the AngularFireDatabase service into the component constructor and implement it

    export class ConsolesComponent {
      consoles: Observable<any[]>;
      constructor(db: AngularFireDatabase) {
        this.consoles = db.list("consoles").valueChanges(); // list("consoles") is the json property name in our database, same name as the column head of consoles in our Google Sheets
      }
    }

    More info: angularfire2 > lists

    consoles.component.html

    Implement the template with the async pipe

    <h1>Consoles:</h1>
    <ol>
      <li *ngFor="let console of (consoles | async)">
        <ul>
          <li>Console: {{ console.console }}</li>
          <li>Price: {{ console.price }}</li>
        </ul>
        <br />
      </li>
    </ol>

    app.module.ts

    Import Firebase module and service. Import environment variables too

    import { AngularFireModule } from "@angular/fire";
    import { AngularFireDatabase } from "@angular/fire/database";
    import { environment } from "../environments/environment"; // environment variables

    Set AngularFireDatabase as a provider

    providers: [AngularFireDatabase],

    Initialize AngularFireModule in the imports array

    imports: [..., AngularFireModule.initializeApp(environment.firebase)],

    More info: Remember to set your Firebase configuration in app/app.module.ts

    • As you can see, environment.firebase is not defined; you need to edit the files environment.ts and environment.prod.ts in ./src/environments
    • Go to Firebase Console > Settings > Project Config. and in the Your apps section click on the </> icon, then copy the config object {...}
    • Paste it as a second property of the environment object in both files

    export const environment = {
      production: false,
      firebase: {...} // your content here
    };

    💾 It is a good time to commit your changes.

    Update the database when the Google Sheet file is edited

    If we have to add new consoles to the app, we go to the Google Sheet file and add a new console to the list. But then we need to run the script again. Fortunately, the Google Apps Script editor has triggers that automate the process. Let’s go there: open the Apps Script editor of your Google Sheet

    Alternatively you can go to G Suite Developer Hub » and see the list of all your script projects

    • Open the editor and click on the triggers button to the left of the “Run” button

    Activate the trigger if it wasn’t activated yet

    • Add a new trigger (bottom right button)
    • Select writeDataToFirebase as the function to execute
    • On “Event source” select Google Sheet
    • On “Event type” select Edit
    • Save and complete the auth.

    Now, when you edit the Google Sheet file, the Apps Script runs and updates the Firebase Realtime Database. Then, thanks to RxJS Observables, we can “observe” the database changes and fetch the new values, which are shown in the frontend in real time, without reloading the page.


    💖 Thank you for reading to the end. I hope it was helpful. Give the repository a star if you liked it. Keep learning!

    Lucas Romero Di Benedetto

    Visit original content creator repository
    https://github.com/lucasromerodb/google-realtime-app

  • spark.condor

    Submit file and shell script to start Apache Spark in standalone mode on HTCondor

    Prerequisites

    • htcondor (vanilla universe)
    • python3 pip and venv
    • network access between the cluster nodes

    Preparation

    Run ./spark.venv.sh to create a python venv with pyspark.

    Note: the script deletes the env/lib64 symlink because htcondor does not transfer symlinks (!?)

    Running Spark in Standalone

    Run using condor_submit spark.condor -queue [num_workers]

    The default worker size is 8 CPUs and 32G RAM. You may adjust this by using the appropriate -a flags on submit or editing the job file.

    The script currently activates a conda environment on the target node, specified via the arguments of the executable.

    Note: The master runs on the first worker node

    Accessing the Cluster

    The script generates two .url files containing the master url and webui url

    You may check if things are working using (bash syntax) e.g. w3m $(<spark-webui.url)

    You may submit a job using (bash syntax) e.g. source env/bin/activate; spark-submit --master $(<spark-master.url) helloworld.py

    Note that the Python driver runs on the submitting node, so you probably also want to submit it as a job

    Stopping the Cluster

    To stop the jobs either call ./spark.stop.sh or manually delete spark-master.url

    Links

    Other solutions

    Note: the parallel universe also works; however, best-effort scheduling of workers seems a better fit to me

    Visit original content creator repository
    https://github.com/SmartDataInnovationLab/spark.condor

  • data-transformer

    Data-Transformer Package for Laravel


    Data is transformed from production (source) to staging (target). This tool is definitely needed when you would rather not work with your users’ real data.

    The main job of this package is to fake data that is GDPR-relevant. Take a look at the following list:

    • Name
    • Email Address
    • Phone number
    • Credit cards
    • Date of birth
    • Place of birth
    • Identification number
    • Online data
      • IP address
      • Location data (GPS)
    • Images
    • License plate
    • Health data

    This list must be considered when you work with real data: GDPR-relevant data must be transformed, while the rest is taken over 1:1.

    Installation – Quick-start

    composer require ipunkt/data-transformer

    Or

    Alternatively you can add these lines to your composer file and then run composer install in the console

    "require": {
    	"ipunkt/data-transformer": "^1.0"
    }
    

    Configuration

    I assume that you already have a connection in your database.php, like the following:

    'mysql' => [
           'driver' => 'mysql',
           'host' => env('DB_HOST', '127.0.0.1'),
           'port' => env('DB_PORT', '3306'),
           'database' => env('DB_DATABASE', 'forge'),
           'username' => env('DB_USERNAME', 'forge'),
           'password' => env('DB_PASSWORD', ''),
           'unix_socket' => env('DB_SOCKET', ''),
           'charset' => 'utf8mb4',
           'collation' => 'utf8mb4_unicode_ci',
           'prefix' => '',
           'prefix_indexes' => true,
           'strict' => true,
           'engine' => null,
       ],
    

    You will be asked which connection is the source when you later run both commands, php artisan transform:dump and php artisan transform:data.

    • run this command: php artisan vendor:publish then choose Ipunkt\DataTransformer\DataTransformerServiceProvider
    • find your Config File in config/data-transformer.php
    • edit your config; for instance, change name to username and/or fakeName to value, or vice versa
    • run php artisan transform:dump {host} {db} {username} {password}. Your standard config file will be data-transformer.json; you’ll find it in the root of your application. An example: php artisan transform:dump 000.000.0.000 transformer root pw
    • 000.000.0.000 –> IP address as host
    • transformer –> DB_NAME
    • root –> USERNAME
    • pw –> PASSWORD

    In data-transformer.json you’ll find something like this:
    {
     "users": {
       "id": "value",
       "name": "fakeName",
       "email": "fakeEmail",
       "action_on_redeem_json": "value",
       "action_on_expire_json": "value",
       "created_at": "value",
       "updated_at": "value"
     }
    }
    

    Here is the full list of data that can be transformed:

    • name => fakeName via faker $this->faker->name
    • email => fakeEmail via faker $this->faker->safeEmail
    • place_of_birth => fakePlaceOfBirth via faker $this->faker->country
    • data_health => fakeDataHealth via faker $this->faker->randomDigit
    • id_number => fakeID via faker $this->faker->uuid
    • phone_number => fakePhoneNumber via faker $this->faker->phoneNumber
    • credit => fakeCredit via faker $this->faker->bankAccountNumber
    • license_plate => fakeLicensePlate via faker $this->faker->randomLetter
    • image => fakeImage via faker $this->faker->image
    • ip_address => fakeIPAddress via faker $this->faker->localIpv4
    • data_location => fakeDataLocation via faker $this->faker->latitude
    • address => fakeAddress via faker $this->faker->address
    • date_of_birth => fakeDateOfBirth via faker $this->faker->dateTime()->format('Y-m-d')

    Here you can decide, for instance, whether the name must be transformed or not. If you leave name unchanged, it will be faked. If you don’t want to transform name, you have to replace fakeName with value. That’s it.
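    The config idea above can be sketched in a few lines. This is an illustrative Python sketch, not the package’s PHP implementation; the fake generators and field names here are made up for the example:

```python
# Illustrative sketch: a column -> rule mapping ("value" keeps the real
# data 1:1, anything else replaces it with fake data), as described above.
import random

FAKERS = {
    "fakeName":  lambda: random.choice(["Alice Doe", "Bob Roe"]),
    "fakeEmail": lambda: f"user{random.randint(1000, 9999)}@example.org",
}

def transform_row(row, mapping):
    """Copy a source row to the target, faking GDPR-relevant columns."""
    out = {}
    for column, rule in mapping.items():
        if rule == "value":
            out[column] = row[column]   # keep the real value 1:1
        else:
            out[column] = FAKERS[rule]()  # replace with fake data
    return out

mapping = {"id": "value", "name": "fakeName", "email": "fakeEmail"}
row = {"id": 7, "name": "Real Person", "email": "real@person.example"}
result = transform_row(row, mapping)
```

    Changing a rule from fakeName to value in the mapping is exactly the decision described above: keep the real column or fake it.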

    The second and last step: run the second command php artisan transform:data {host} {db} {username} {password} {--target=mysql} (like the transform:dump command)

    You have to change mysql to whatever your target connection is called in your database.php

    If you want to disable foreign key checks on the tables, add the foreign-keys-checks flag at the end of the second command: php artisan transform:dump {source} {target} --foreign-keys-checks=no

    Note: this is only required if you changed your config file; otherwise you don’t need to do anything else.

    And you’re done!

    Visit original content creator repository https://github.com/ipunkt/data-transformer
  • Factorial-of-Large-Number

    Factorial-of-Large-Numbers

    The factorial of a non-negative integer n is the product of all positive integers smaller than or equal to n. For example, the factorial of 6 is 6*5*4*3*2*1, which is 720.


    The factorial of 100 has 158 digits. It is not possible to store this many digits even if we use long long int. Examples:

    Input : 100

    Output : 93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000

    Input : 50

    Output : 30414093201713378043612608166064768844377641568960512000000000000

    The following is a detailed algorithm for finding factorial.

    factorial(n)

    1. Create an array ‘res[]’ of MAX size, where MAX is the maximum number of digits in the output.
    2. Initialize the value stored in ‘res[]’ as 1 and initialize ‘res_size’ (size of ‘res[]’) as 1.
    3. For all numbers from x = 2 to n:
       a) Multiply x with res[] and update res[] and res_size to store the multiplication result.

    How do we multiply a number ‘x’ with the number stored in res[]? The idea is to use simple school mathematics: we multiply x with every digit of res[], one by one. The important point to note here is that digits are multiplied from the rightmost digit to the leftmost digit. If we stored digits in that same order in res[], it would become difficult to update res[] without extra space. That is why res[] is maintained in reverse, i.e., digits from right to left are stored.

    multiply(res[], x)

    1. Initialize carry as 0.
    2. For i = 0 to res_size – 1:
       a) Find the value of res[i] * x + carry. Let this value be prod.
       b) Update res[i] by storing the last digit of prod in it.
       c) Update carry by storing the remaining digits in carry.
    3. Put all digits of carry in res[] and increase res_size by the number of digits in carry.
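    The steps above can be sketched in Python (an illustrative translation of the pseudocode, not the original implementation):

```python
# res holds the digits of the running product in reverse order
# (least-significant digit first), so carries propagate left to right.

def multiply(res, x):
    carry = 0
    for i in range(len(res)):
        prod = res[i] * x + carry
        res[i] = prod % 10        # keep the last digit of prod in place
        carry = prod // 10        # carry the remaining digits forward
    while carry:                  # append leftover carry digits
        res.append(carry % 10)
        carry //= 10

def factorial(n):
    res = [1]                     # res[] initialized to 1
    for x in range(2, n + 1):
        multiply(res, x)
    return "".join(map(str, reversed(res)))  # digits are stored reversed

print(factorial(100))
```

    Because res[] grows as needed (a Python list here, a fixed MAX-size array in the pseudocode), this computes factorials far beyond what long long int can hold.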
    Visit original content creator repository https://github.com/arnab132/Factorial-of-Large-Number
  • Argon2

    Visit original content creator repository
    https://github.com/admkopec/Argon2

  • feathers

    Feathers

    Feathers is a compositor for raven-os.

    Build instructions

    Feathers requires Libsocket to work. Please refer to https://github.com/dermesser/libsocket#building-libsocket

    mkdir -p build/<build_type>
    cd build/<build_type>
    cmake ../.. -DCMAKE_BUILD_TYPE=<BuildType> # cmake wants a CamelCase value, so "Debug" for debug, "Release" for release etc.
    make

    Run instructions

    Feathers can be run in a TTY or inside another window manager. Simply run the executable.
    If you’re a Frenchman like us, you may need to change the env to get the right keyboard layout:

    XKB_DEFAULT_LAYOUT=fr ./build/<build_type>/feathers
    

    Shortcuts

    These are the current shortcuts:

    • Alt+Return: Open terminal
    • Alt+F2: Toggle fullscreen
    • Alt+E: Change parent container tiling direction
    • Alt+H: Open next window below
    • Alt+V: Open next window on the right
    • Alt+Space: Toggle floating mode
    • Alt+Tab: Switch window (will be removed when movement key shortcuts are added)
    • Alt+Escape: Quit feathers
    • Alt+F1: Open config editor

    Albinos integration

    To work, the albinos service must be started before the compositor. This may be improved in the future.
    Feathers will create two files, “configKey.txt” and “readonlyConfigKey.txt”, which contain the keys to both configurations.

    You can edit settings using the albinos editor. See the albinos project for more details.

    Visit original content creator repository
    https://github.com/raven-os/feathers

  • mpdev

    mpdev


    mpdev is a music player daemon event watcher. It connects to the mpd socket and uses mpd’s idle command to listen for player events. Whenever an event occurs, mpdev can carry out various activities using user defined hooks. The idea for mpdev comes from mpdcron. Currently mpdev runs on Linux and Mac OSX. For Linux there is a binary package which mostly automates installation and setup. For Mac OSX, one needs to manually compile and install, and configure things like startup, environment variables and creating the sqlite3 database.

    mpdev helps in bulding a database of your played tracks. Along with a script mpdplaylist, it can generate a playlist for mpd as per your taste and mood.

    You can create scripts in $HOME/.mpdev directory. The default installation installs two scripts player and playpause in $HOME/.mpdev directory for uid 1000. The scripts are adequate for most use cases. The player script does the following

    1. scrobbles titles to last.fm and libre.fm. You have to create API keys by running lastfm-scrobbler and librefm-scrobbler one time. You can disable scrobbling by setting DISABLE_SCROBBLE environment variable.

    2. updates play counts in the sqlite stats.db. You could write your own script and update any external database.

    3. Synchronizes the ratings in the sticker (sqlite). You could write your own script and update any external database. You can also automatically rate songs to some default value by setting an environment variable AUTO_RATING by creating a file $HOME/.mpdev/AUTO_RATING. If you are using supervise from daemontools (more on that below), creating an environment variable is very easy. e.g. to have an environment variable AUTO_RATING with value 6, you just need to have a file named AUTO_RATING in /service/mpdev/variables. The file should just have 6 as the content.

    4. Update the song’s karma. Karma is a number ranging from 0 to 100. If a song is skipped, its karma is downgraded by 1. Karma can be downgraded only for songs rated less than 6, played 5 times or less, and whose karma is 50 or less. If a song is played twice within a day, its karma is upgraded by 4. If a song is played twice within a week, its karma is upgraded by 3. If a song is played twice within 14 days, its karma is upgraded by 2. If a song is played twice within a month, its karma is upgraded by 1. All songs start with a default karma of 50. A song earns a permanent karma if any of the below happen

      • its karma becomes 60 or more.
      • has been rated 6 or greater.
      • has been played from beginning to end for more than 5 times.
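    As an illustration, the karma rules above can be sketched as plain functions. This is hypothetical Python; the actual logic lives in mpdev’s player hook and its sqlite database, and the "within a month" window is assumed here to mean 31 days:

```python
# Illustrative sketch of the karma rules described above.

DEFAULT_KARMA = 50

def karma_on_skip(karma, rating, play_count):
    """Downgrade by 1, but only for songs rated < 6, played <= 5 times,
    and whose karma is 50 or less."""
    if rating < 6 and play_count <= 5 and karma <= 50:
        return max(0, karma - 1)
    return karma

def karma_on_replay(karma, days_since_last_play):
    """Upgrade depending on how soon the song was played again."""
    if days_since_last_play <= 1:
        bonus = 4      # played twice within a day
    elif days_since_last_play <= 7:
        bonus = 3      # within a week
    elif days_since_last_play <= 14:
        bonus = 2      # within 14 days
    elif days_since_last_play <= 31:
        bonus = 1      # within a month (assumed 31 days)
    else:
        bonus = 0
    return min(100, karma + bonus)

def is_permanent(karma, rating, full_plays):
    """A song earns permanent karma once any of the three conditions holds."""
    return karma >= 60 or rating >= 6 or full_plays > 5
```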

    The above four are actually done by running a hook, a script named player in the $HOME/.mpdev directory. You can put your own script named player in this directory. In fact mpdev can run specific hooks for specific types of mpd events. A hook can be any executable program or script. It is passed arguments and has certain environment variables, related to the song being played, available to it. Below is a list of events and the corresponding hooks that will be executed if available.

    MPD EVENT Hook script
    SONG_CHANGE ~/.mpdev/player
    PLAY/PAUSE ~/.mpdev/playpause
    STICKER_EVENT ~/.mpdev/sticker
    MIXER_EVENT ~/.mpdev/mixer
    OPTIONS_EVENT ~/.mpdev/options
    OUTPUT_EVENT ~/.mpdev/output
    UPDATE_EVENT ~/.mpdev/update
    DATABASE_EVENT ~/.mpdev/database
    PLAYLIST_EVENT ~/.mpdev/playlist
    STORED_PLAYLIST_EVENT ~/.mpdev/stored_playlist
    PARTITION_EVENT ~/.mpdev/partition
    SUBSCRIPTION_EVENT ~/.mpdev/subscription
    MESSAGE_EVENT ~/.mpdev/message
    MOUNT_EVENT ~/.mpdev/mount
    NEIGHBOUR_EVENT ~/.mpdev/neighbour
    CUSTOM_EVENT ~/.mpdev/custom

    The hooks are passed the following arguments

    mpd-event      - Passed when the above events listed (apart from SONG_CHANGE) happen.
    player-event   - Passed when you play or pause song
    playlist-event - Passed when the playlist changes
    now-playing    - Passed when a song starts playing
    end-song       - Passed when a song finishes playing
    
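    As an illustration, a minimal hook could look like this. This is a hypothetical Python sketch (hooks can be any executable), and the printed format is made up; real metadata arrives via the environment variables listed below:

```python
#!/usr/bin/env python3
# Hypothetical ~/.mpdev/player hook: mpdev passes the event name as the
# first argument and song metadata as environment variables.
import os
import sys

def handle(event, env):
    artist = env.get("SONG_ARTIST", "?")
    title = env.get("SONG_TITLE", "?")
    if event == "now-playing":
        return f"now playing: {artist} - {title}"
    if event == "end-song":
        played = env.get("SONG_PLAYED_DURATION", "0")
        return f"finished: {artist} - {title} (played {played}s)"
    return ""

if __name__ == "__main__":
    line = handle(sys.argv[1] if len(sys.argv) > 1 else "", os.environ)
    if line:
        print(line)
```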

    The default installation makes a copy of /usr/libexec/mpdev/player and /usr/libexec/mpdev/playpause in $HOME/.mpdev for the user with uid 1000. The player script inserts or updates songs in the stats.db and sticker.db sqlite databases when a song gets played.

    For bulk inserts, updates and deletions, the mpdev package also comes with the mpdev_update(1) and mpdev_cleanup(1) programs, which help in maintaining the stats.db and sticker.db sqlite databases. These two programs use SQL transactions and are extremely fast, taking just a few seconds to create/update the stats.db and sticker.db sqlite databases.

    You can disable all database updates by the player script by setting the NO_DB_UPDATE environment variable. In that case, the player script will just print information about the song being played and actions like pause, stop and play.

    The default installation also makes a copy of /usr/libexec/mpdev/output and /usr/libexec/mpdev/mixer in $HOME/.mpdev. At the moment, these scripts just print information on the state of output devices and the volume level. You can edit them if you can think of something useful. If you are installing mpdev from the Open Build Service repository, the mpdev service will automatically update these scripts if they change in the package. If you don’t want auto-update, you can create an override file. e.g. if you create a file $HOME/.mpdev/.player.nooverwrite, the script $HOME/.mpdev/player will never be replaced with a newer script from the binary package.

    Example 1. create stats.db in the current directory

    $ time mpdev_update -S -j -t -D 0 -d stats.db
    Processed 42630 rows, Failures 0 rows, Updated 42636 rows
    
    real    0m0.830s
    user    0m0.405s
    sys     0m0.096s
    

    Example 2. Update stats.db in the current directory to add new songs

    $ time mpdev_update -S -j -t -D 0 -d stats.db
    Processed 42636 rows, Failures 0 rows, Updated 6 rows
    
    real    0m0.725s
    user    0m0.353s
    sys     0m0.067s
    

    Environment Variables available to hooks

    Environment variable Description
    SONG_ID Set to the ID of song in mpd database
    SONG_URI Set to the full path of the music file
    SONG_TITLE Set to the title of the song
    SONG_ARTIST Set to the song artist
    SONG_ALBUM Set to the song album
    SONG_DATE Set to the Date tag of the song
    SONG_GENRE Set to the Genre tag of the song
    SONG_TRACK Set to the track number of the song
    SONG_DURATION Set to the song duration
    SONG_PLAYED_DURATION Set to the duration for which the song was played
    ELAPSED_TIME Total time elapsed within the current song in seconds, but with higher resolution
    SONG_POSITION Set to position of the current song
    SONG_LAST_MODIFIED Set to the last modified time of the song
    START_TIME Time at which the song play started
    END_TIME Time at which the song ended
    PLAYER_STATE Set when you pause/resume player
    DURATION Set to song duration as a floating point number
    SCROBBLER_LASTFM Set to 1 if tracks are getting scrobbled to lastfm. Will not be set if DISABLE_SCROBBLE environment variable is set.
    SCROBBLER_LIBREFM Set to 1 if tracks are getting scrobbled to librefm. Will not be set if DISABLE_SCROBBLE environment variable is set.
    OUTPUT_CHANGED Set when you enable or disable any output devices. This indicates the new state of an output device.
    VOLUME Set during startup and when you change mixer volume. This indicates the volume level as a percentage.
    VERBOSE Verbose level of mpdev

    Apart from the above environment variables, the state of all output devices is available as environment variables to the script ~/.mpdev/output. The example below is for a case with three devices listed in /etc/mpd.conf – (Piano DAC Plus 2.1 with ID 0, Xonar EssenceOne with ID 1 and Scarlett 2i2 USB with ID 2). e.g. to check if the first device is enabled, you can just query the environment variable OUTPUT_0_STATE. Whenever the state of any output device changes (enabled or disabled), the script ~/.mpdev/output gets called with access to the following environment variables.

    OUTPUT_0_ID=0
    OUTPUT_0_NAME=Piano DAC Plus 2.1
    OUTPUT_0_STATE=enabled
    OUTPUT_1_ID=1
    OUTPUT_1_NAME=Xonar EssenceOne
    OUTPUT_1_STATE=disabled
    OUTPUT_2_ID=2
    OUTPUT_2_NAME=Scarlett 2i2 USB
    OUTPUT_2_STATE=disabled
    

    If you create the stats database, mpdev will update the last_played and play_count fields in the stats db. It will also update the rating that you choose for the song. The ability to rate songs in mpd can be enabled by having the sticker_file keyword uncommented in /etc/mpd.conf. You will also need an mpd client that uses the mpd sticker command. One such player is cantata, which is available for all linux distros and Mac OSX.

    The sticker database can be enabled by having the following entry in /etc/mpd.conf

    #
    # The location of the sticker database.  This is a database which
    # manages dynamic information attached to songs.
    #
    sticker_file                    "/var/lib/mpd/sticker.db"
    #
    

    You need to restart mpdev for new environment variables to become available to it. To restart the mpdev daemon, just run svc -r /service/mpdev. Below are a few operations to stop, start and restart mpdev.

    $ sudo svc -d /service/mpdev # this stops  the mpdev daemon
    $ sudo svc -u /service/mpdev # this starts the mpdev daemon
    $ sudo svc -r /service/mpdev # this stops and restarts the mpdev daemon
    

    If you do a source install and want to have mpdev started by supervise, you need to do the following

    $ sudo apt-get install daemontools
    $ sudo ./create_service --servicedir=/service \
        --user=1000 --add-service
    
    If you want mpdev to print song information to an LCD display on a
    remote host running lcdDaemon:

    $ sudo ./create_service --servicedir=/service \
        --user=1000 --lcdhost=IP --lcdport=1806 --add-service
    

    If you want to do a source install and not have supervise run the mpdev daemon, you can write a simple script and call it from an rc script during boot. If you don't use supervise, you need some knowledge of shell scripting, and you will also have to arrange for your script to be called from an rc or systemd script whenever your machine starts. A very simple example of such a script is below.

    #!/bin/sh
    # keep mpdev running: restart it one second after it exits
    while true
    do
        env HOME=/home/pi \
            MPDEV_TMPDIR=/tmp/mpdev \
            XDG_RUNTIME_DIR=/home/pi \
            PATH=$PATH:/usr/bin:/bin \
            AUTO_RATING=6 \
        /usr/bin/mpdev -v
        sleep 1
    done
    

    The stats database can be created by running the mpdev_update program, or by running the sqlite3 program with the following SQL script.

    CREATE TABLE IF NOT EXISTS song(
            id              INTEGER PRIMARY KEY,
            play_count      INTEGER DEFAULT 0,
            rating          INTEGER DEFAULT 0,
            uri             TEXT UNIQUE NOT NULL,
            duration        INTEGER,
            last_modified   INTEGER,
            date_added      INTEGER DEFAULT (strftime('%s','now')),
            artist          TEXT,
            album           TEXT,
            title           TEXT,
            track           TEXT,
            name            TEXT,
            genre           TEXT,
            date            TEXT,
            composer        TEXT,
            performer       TEXT,
            disc            TEXT,
            last_played     INTEGER,
            karma           INTEGER NOT NULL CONSTRAINT karma_percent CHECK (karma >= 0 AND karma <= 100) DEFAULT 50
    );
    
    CREATE INDEX rating on song(rating);
    CREATE INDEX uri on song(uri);
    CREATE INDEX last_played on song(last_played);
    CREATE INDEX date_added on song(date_added);
    CREATE INDEX last_modified on song(last_modified);
    
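    Once this schema is in place, ordinary SQL against the stats database answers common questions. The queries below are illustrative sketches using only the columns defined above:

```sql
-- ten most played songs
SELECT artist, title, play_count
  FROM song
 ORDER BY play_count DESC
 LIMIT 10;

-- songs never played, or not played in the last 30 days
SELECT uri
  FROM song
 WHERE last_played IS NULL
    OR last_played < strftime('%s','now') - 30*86400;
```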

    To use lastfm-scrobbler or librefm-scrobbler, you need to have your own last.fm API keys. Once you have created your API keys, you can view them on the api accounts page.

    The scrobbler modules lastfm-scrobbler and librefm-scrobbler enable scrobbling to last.fm and libre.fm. They can be enabled by running lastfm-scrobbler and librefm-scrobbler once, as shown below

    $ lastfm-scrobbler  --connect --api_key="abc123" --api_sec="xyz890"
    $ librefm-scrobbler --connect --api_key="abc123" --api_sec="xyz890"
    

    As you can see above, the same key generated for last.fm can be used for libre.fm.

    After you have added the connection you should have a lastfm-scrobbler.conf file in ~/.config/lastfm-scrobbler and a librefm-scrobbler.conf file in ~/.config/librefm-scrobbler. These two files will contain your SESSION_KEY, API_KEY and API_SEC. lastfm-scrobbler and librefm-scrobbler are a copy of moc-scrobbler from https://gitlab.com/pachanka/moc-scrobbler, which is distributed under the terms of the MIT License. mpdev will not do scrobbling if SESSION_KEY, API_KEY and API_SEC haven't been set up, or if the environment variable DISABLE_SCROBBLE is set.

    By default mpdev gets the host name and port for mpd from the MPD_HOST and MPD_PORT environment variables. The MPD_PASSWORD environment variable can be set to make mpdev connect to a password-protected mpd. If these environment variables aren't set, mpdev connects to localhost on port 6600. You can watch the logs while your songs are playing using tail -f /var/log/svc/mpdev/current
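    The defaulting behaviour described here can be sketched in shell. This mirrors the documented behaviour only, not mpdev's actual internals:

```shell
# Sketch of mpdev's connection defaults as documented above:
# MPD_HOST/MPD_PORT win when set, otherwise localhost:6600.
# MPD_PASSWORD, when present, is used to authenticate.
MPD_HOST="${MPD_HOST:-localhost}"
MPD_PORT="${MPD_PORT:-6600}"
if [ -n "$MPD_PASSWORD" ]; then
    echo "connecting to $MPD_HOST:$MPD_PORT (with password)"
else
    echo "connecting to $MPD_HOST:$MPD_PORT"
fi
```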

    Installation

    Source Installation

    Take a look at the BuildRequires field in mpdev.spec and the Build-Depends field in debian/mpdev.dsc

    mpdev uses libraries from libqmail. If you are building mpdev from source, you need to install libqmail first.

    $ cd /usr/local/src
    $ git clone --no-tags --no-recurse-submodules --depth=1 https://github.com/mbhangui/libqmail
    $ cd libqmail
    $ ./default.configure
    $ make && sudo make install-strip
    
    $ cd /usr/local/src/mpdev
    $ prefix=/usr    # choose the install prefix that suits your system
    $ ./configure --prefix=$prefix --libexecdir=$prefix/libexec/mpdev
    $ make
    $ sudo make install
    

    Create RPM / Debian packages

    This is done by running the create_rpm / create_debian scripts. (Here version refers to the existing version of the mpdev package.)

    Create RPM package

    $ pwd
    /usr/local/src/mpdev
    $ ./create_rpm
    $ ls -l $HOME/rpmbuild/RPMS/x86_64/mpdev*
    -rw-rw-r--. 1 mbhangui mbhangui  290567 Feb  8 09:05 mpdev-0.1-1.1.fc31.x86_64.rpm
    

    Create Debian package

    $ pwd
    /usr/local/src/mpdev
    $ ./create_debian
    $ ls -l $HOME/stage/mpdev*
    -rw-r--r--  1 mbhangui mbhangui  558 Jul  2 19:30 mpdev.1-1.1_amd64.buildinfo
    -rw-r--r--  1 mbhangui mbhangui  981 Jul  2 19:30 mpdev.1-1.1_amd64.changes
    -rw-r--r--  1 mbhangui mbhangui 2916 Jul  2 19:30 mpdev.1-1.1_amd64.deb
    

    Prebuilt Binaries

    mpdev

    Prebuilt binaries using openSUSE Build Service are available for mpdev for

    • Debian (including ARM images for Debian 10, which work (and are tested) on RaspberryPI models 2, 3 & 4, and Debian 9 for Allo Sparky)
    • Raspbian 10 and 11 for RaspberryPI (ARM images)
    • openSUSE Tumbleweed (x86_64)
    • Arch Linux (x86_64)
    • Fedora (x86_64)
    • Ubuntu (x86_64 and ARM images for BananaPI)

    Use the below url for installation

    https://software.opensuse.org//download.html?project=home%3Ambhangui%3Araspi&package=mpdev

    IMPORTANT NOTE for Debian if you are going to use supervise from the daemontools package

    The debian/ubuntu repositories already have daemontools and ucspi-tcp packages, which are far behind, in terms of features, the versions that the indimail-mta repository provides. When you install mpdev, apt-get may pull the wrong version with limited features; apt-get install mpdev will then complete with errors, leaving an incomplete setup. You need to ensure that the two packages get installed from the indimail-mta repository instead of the debian repository. If you don't do this, mpdev will not function correctly, as it depends on proper global environment variables being set, and global environment variables are not supported by daemontools from the official debian repository. Additionally, the official ucspi-tcp package from the debian repository doesn't support TLS, so services that depend on TLS will not function.

    All you need to do is set a higher preference for the indimail-mta repository by creating /etc/apt/preferences.d/preferences with the following contents

    $ sudo /bin/bash
    # cat > /etc/apt/preferences.d/preferences <<EOF
    Package: *
    Pin: origin download.opensuse.org
    Pin-Priority: 1001
    EOF
    

    You can verify this by doing

    $ apt policy daemontools ucspi-tcp
    daemontools:
      Installed: 2.11-1.1+1.1
      Candidate: 2.11-1.1+1.1
      Version table:
         1:0.76-7 500
            500 http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages
     *** 2.11-1.1+1.1 1001
           1001 http://download.opensuse.org/repositories/home:/indimail/Debian_10  Packages
            100 /var/lib/dpkg/status
    ucspi-tcp:
      Installed: 2.11-1.1+1.1
      Candidate: 2.11-1.1+1.1
      Version table:
         1:0.88-6 500
            500 http://raspbian.raspberrypi.org/raspbian buster/main armhf Packages
     *** 2.11-1.1+1.1 1001
           1001 http://download.opensuse.org/repositories/home:/indimail/Debian_10/ Packages
            100 /var/lib/dpkg/status
    

    Building your own mpd music server

    Take a look at pistop and this README.

    Visit original content creator repository https://github.com/mbhangui/mpdev