Category: Blog

  • faust-things

    Faust Things

This repository contains various audio-related projects programmed in Faust.

    Granola

    Granola is a monophonic granular live feed processor.

    The grain processor is inspired by the Mutable Instruments Beads. The grain window shape control is inspired by the GR-1 Granular synthesizer from Tasty Chips Electronics.

    Specifications

    • Audio I/O
      • Manual input and output gain control.
      • Recording time: BUFFER_DURATION.
      • The FREEZE button freezes the content of the recording table.
• Feedback path with attenuation and a limiter (with 1-sample delay). The feedback signal is taken from the grains output, before the dry/wet crossfader.
      • Dry/Wet control.
• TODO: Stereo I/O with automatic level detection and a limiter.
      • TODO: Gate signal in sync with the grains.
      • TODO: Antialiased output.
      • TODO: Spatialized output.
    • Grains generation modes
      • The SEED button triggers a grain.
• Automatically trigger grains at a periodic rate with the DENSITY parameter (at maximum density, 1000 grains are generated per second; M.I. Beads has a maximum rate of ~260 grains per second).
        Note: The actual number of triggered grains cannot exceed the CONCURRENT_GRAINS value (30 for M.I. Beads).
      • TODO: Automatically trigger grains at a randomized rate.
      • TODO: Start a chain of delayed and pitched grains instead of a single one.
    • Grain parameters
      • TIME: Controls the playback position of each grain within the table. In other words, it delays the grains.
      • SIZE: Grain duration from 0.03 seconds to the table length, forward or backward playback.
• SHAPE: The shape of the grain envelope. The shape control morphs the envelope from a square (in which case the grain's original amplitude is maintained), to an inverted saw (slow release), to a triangle (attack and release times are equal), and finally to a saw (slow attack); see the sketch after this list.
        Window envelope from the shape parameter value
      • PITCH: The pitch of the grain (-2..+2 octaves).
Note: The four grain parameters are latched when a grain is triggered. Hence, the grain parameters remain the same throughout a grain's playback, but they may differ between grains.
• TODO: TIME slew limiter for a tape-like scrubbing effect.
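As a rough sketch of the SHAPE morph described above (one plausible piecewise-linear parameterization consistent with that description; the actual curve used in the code may differ), let the shape parameter be $s \in [0, 1]$ and let $a(s)$ and $r(s)$ be the attack and release fractions of the grain duration:

$$a(s) = \begin{cases} 0 & 0 \le s \le \tfrac{1}{3} \\ \tfrac{3s - 1}{2} & \tfrac{1}{3} < s \le 1 \end{cases} \qquad r(s) = \begin{cases} 3s & 0 \le s \le \tfrac{1}{3} \\ 1 - a(s) & \tfrac{1}{3} < s \le 1 \end{cases}$$

This yields a square at $s = 0$ ($a = r = 0$), an inverted saw at $s = 1/3$ ($a = 0$, $r = 1$), a triangle at $s = 2/3$ ($a = r = 1/2$), and a saw at $s = 1$ ($a = 1$, $r = 0$).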

    Usage

    _ : Granola(BUFFER_DURATION, CONCURRENT_GRAINS).ui(uix) : _
    

    Where:

    • BUFFER_DURATION: buffer duration in seconds.
    • CONCURRENT_GRAINS: (int) number of grains allocated.
    • uix: (int) the UI instance number.

A demo function of Granola with a limiter, an LPF, and a reverb on the output:

    _ : Granola(BUFFER_DURATION, CONCURRENT_GRAINS).demo : _, _
    

    Examples

// A Granola demonstration setup with a 5-second buffer and up-to-15-grain polyphony.
    process = Granola(5, 15).demo;
    
// A Granola grain processor.
// It has a 1-second audio buffer and up to 30 grains playing at the same time.
    process = Granola(1, 30).ui(0);
    
// Two Granola instances allowing each channel of a stereo signal to be processed differently.
// They have a 5-second audio buffer and up to 15 grains playing at the same time.
    process = Granola(5, 15).ui(0), Granola(5, 15).ui(1);
    
// Two Granola instances sharing the same user interface, making a stereo grain processor.
// They have a 5-second audio buffer and up to 15 grains playing at the same time.
    process = Granola(5, 15).ui(0), Granola(5, 15).ui(0);
    
// As the Granola grain processor is able to play many copies of the input signal simultaneously,
// it may rapidly saturate its output. This can be avoided by manually reducing the output gain,
// by selecting a smoother grain-window shape and/or by placing a limiter in the circuit path.
    // Also note that Granola pairs well with a reverb.
    process = Granola(2, 30).ui(0) : co.limiter_1176_R4_mono <: dm.zita_light;
    
// Granola used as a delay-like effect. Parameters are tailored for the default audio file
// of the Faust web IDE (wait 5 seconds to let the table be filled). Play it looped.
    process = Granola(5, 10).grains(0, _, 4.76, 0, 0.5, 0, 0.5, 0.5, 0.03, 0.5, 1, 0, 0, 0.5);
    
// Granola used as a complex feedback effect: the most important parameter here is the 6th,
// the feedback control. Feed the input with some audio (the looped default audio file, for
    // example). Let the feedback grow. It will gradually decrease when the sound is muted.
    //
    // __Be careful with your ears, this can get very loud.__
    //
    process = Granola(5, 10).grains(0, _, 4.72, 0, 1, 0.4, 1, 0.6, 0.604, 3, 0.972, 0, 0, 0.5);
    
Source: https://github.com/jlp6k/faust-things
  • erlang-poker-ml

    Erlang Poker Machine-Learning

This is an application that uses statistical data collected over time to predict outcomes in Texas Hold'em matches. It provides a CLI that prompts the user to enter the cards in their hand and on the table for each round, and to record whether they won or lost each match. This information is stored and queried using Eresye, a rule-based knowledge management engine. At the end of every match, the probabilities of occurrence for each card per round, along with the probability of winning or losing the match with the current cards, are recalculated and organized into a decision tree. At each round, the probability of getting a good hand and winning the match is read from the decision tree and printed, suggesting whether the user should raise, call, or fold.
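To make the suggestion step concrete, here is a minimal Python sketch of the logic described above (this is an illustration only, not the Erlang implementation in 'poker.erl'; the function names, data layout, and thresholds are hypothetical):

    from typing import Dict, Tuple

    # history maps a hand rank to (wins, total matches) recorded so far.
    History = Dict[str, Tuple[int, int]]

    def win_probability(history: History, hand_rank: str) -> float:
        """Estimate P(win | hand rank) from counts of past matches."""
        wins, total = history.get(hand_rank, (0, 0))
        if total == 0:
            return 0.5  # no data yet, assume even odds
        return wins / total

    def suggest(history: History, hand_rank: str) -> str:
        """Map the estimated win probability to a suggested action."""
        p = win_probability(history, hand_rank)
        if p > 0.65:
            return "raise"
        if p > 0.35:
            return "call"
        return "fold"

    # Example: "two pair" won 28 of the 40 matches recorded so far.
    history: History = {"two pair": (28, 40)}
    print(suggest(history, "two pair"))  # -> raise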

The entire application and the stateless decision tree classifier are fully implemented in Erlang, in the file 'poker.erl'. A trained model containing the history of matches and ranked hands is also provided (the 'storage' file), so you can start testing the application immediately.

    You can find more info about the implementation of this app on my blog.

Source: https://github.com/EdDuarte/erlang-poker-ml

  • language-ext

    lang-ext

    C# Functional Programming Language Extensions

    This library uses and abuses the features of C# to provide a pure functional-programming framework that, if you squint, can look like extensions to the language itself. The desire here is to make programming in C# much more robust by helping the engineer’s inertia flow in the direction of declarative and pure functional code rather than imperative. Using these techniques for large code-bases can bring tangible benefits to long-term maintenance by removing hidden complexity and by easing the engineer’s and team’s cognitive load.


    Nu-get

Nu-get package | Description
LanguageExt.Core | All of the core types and functional 'prelude'. This is all that's needed to get started.
LanguageExt.FSharp | F# to C# interop package. Provides interop between the LanguageExt.Core types (like Option, List and Map) and the F# equivalents, as well as interop between core BCL types and F#
LanguageExt.Parsec | Port of the Haskell parsec library
LanguageExt.Rx | Reactive Extensions support for various types within the Core
LanguageExt.Sys | Provides an effects wrapper around the .NET System namespace, making common IO operations pure and unit-testable


    Getting started

To use this library, simply include LanguageExt.Core.dll in your project or grab it from NuGet. It is also worth setting up some global usings for your project. This is the full list that will cover the key functionality and bring it into scope:

    global using LanguageExt;
    global using LanguageExt.Common;
    global using static LanguageExt.Prelude;
    global using LanguageExt.Traits;
    global using LanguageExt.Effects;
    global using LanguageExt.Pipes;
    global using LanguageExt.Pretty;
    global using LanguageExt.Traits.Domain;

A minimal set might be:

    global using LanguageExt;
    global using static LanguageExt.Prelude;

    The namespace LanguageExt contains most of the core types; LanguageExt.Prelude contains the functions that bring into scope the prelude functions that behave like standalone functions in ML style functional programming languages; LanguageExt.Traits brings in the higher-kinded trait-types and many extensions; LanguageExt.Common brings in the Error type and predefined Errors.

    Prologue

    From C# 6 onwards we got the ability to treat static classes like namespaces. This means that we can use static methods without qualifying them first. That instantly gives us access to single term method names that look exactly like functions in ML-style functional languages. i.e.

        using static System.Console;
        
        WriteLine("Hello, World");

    This library tries to bring some of the functional world into C#. It won’t always sit well with the seasoned C# OO programmer, especially the choice of camelCase names for a lot of functions and the seeming ‘globalness’ of a lot of the library.

    I can understand that much of this library is non-idiomatic, but when you think of the journey C# has been on, is “idiomatic” necessarily right? A lot of C#’s idioms are inherited from Java and C# 1.0. Since then we’ve had generics, closures, Func, LINQ, async… C# as a language is becoming more and more like a functional language on every release. In fact, the bulk of the new features are either inspired by or directly taken from features in functional languages. So perhaps it’s time to move the C# idioms closer to the functional world’s idioms?

    My goal with this library is very much to create a whole new community within the larger C# community. This community is not constrained by the dogma of the past or by the norms of C#. It understands that the OOP approach to programming has some problems and tries to address them head-on.

    And for those that say “just use F#” or “just use Haskell”, sure, go do that. But it’s important to remember that C# has a lot going for it:

• Incredible investment into a state-of-the-art compiler
    • Incredible tooling (Visual Studio and Rider)
    • A large ecosystem of open-source libraries
    • A large community of developers already using it
      • This is also very important for companies that hire engineers
    • It is a functional programming language! It has first-class functions, lambdas, etc.
      • And with this library it has a functional-first Base Class Library

    A note about naming

    One of the areas that’s likely to get seasoned C# heads worked up is my choice of naming style. The intent is to try and make something that feels like a functional language rather than following rules of naming conventions (mostly set out by the BCL).

    There is, however, a naming guide that will keep you in good stead while reading through this documentation:

    • Type names are PascalCase in the normal way
    • The types all have constructor functions rather than public constructors that you instantiate with new. They will always be PascalCase:
        Option<int> x = Some(123);
        Option<int> y = None;
        Seq<int> items = Seq(1,2,3,4,5);
        List<int> items = List(1,2,3,4,5);
        HashMap<int, string> dict = HashMap((1, "Hello"), (2, "World"));
        Map<int, string> dict = Map((1, "Hello"), (2, "World"));
• Any (non-type constructor) static function that can be used on its own via using static LanguageExt.Prelude is camelCase.
        var x = map(opt, v => v * 2);
    • Any extension methods, or anything “fluent” are PascalCase in the normal way
        var x = opt.Map(v => v * 2);

    Even if you disagree with this non-idiomatic approach, all of the camelCase static functions have fluent variants, so you never actually have to see the non-standard stuff.

    Features

Location | Feature | Description
Core | IO<A> | A synchronous and asynchronous side-effect: an IO monad
Core | Eff<A> | A synchronous and asynchronous side-effect with error handling
Core | Eff<RT, A> | Same as Eff<A> but with an injectable runtime for dependency-injection: a unit-testable IO monad
Core | Pipes | A clean and powerful stream processing system that lets you build and connect reusable streaming components
Core | StreamT | A streaming effects transformer: less powerful than Pipes, but easier to use

Location | Feature | Description
Core | Atom<A> | A lock-free atomically mutable reference for working with shared state
Core | Ref<A> | An atomic reference to be used in the transactional memory system
Core | AtomHashMap<K, V> | An immutable HashMap with a lock-free atomically mutable reference
Core | AtomSeq<A> | An immutable Seq with a lock-free atomically mutable reference
Core | VectorClock<A> | Understand distributed causality
Core | VersionVector<A> | A vector clock with some versioned data
Core | VersionHashMap<ConflictV, K, V> | Distributed atomic versioning of keys in a hash-map

Location | Feature | Description
Core | Arr<A> | Immutable array
Core | Seq<A> | Lazy immutable list, evaluated at most once: very, very fast!
Core | Iterable<A> | Wrapper around IEnumerable with support for traits, enabling the higher-kinded traits to work with enumerables
Core | Lst<A> | Immutable list (use Seq over Lst unless you need InsertAt)
Core | Map<K, V> | Immutable map
Core | Map<OrdK, K, V> | Immutable map with Ord constraint on K
Core | HashMap<K, V> | Immutable hash-map
Core | HashMap<EqK, K, V> | Immutable hash-map with Eq constraint on K
Core | Set<A> | Immutable set
Core | Set<OrdA, A> | Immutable set with Ord constraint on A
Core | HashSet<A> | Immutable hash-set
Core | HashSet<EqA, A> | Immutable hash-set with Eq constraint on A
Core | Que<A> | Immutable queue
Core | Stck<A> | Immutable stack

Location | Feature | Description
Core | Option<A> | Option monad
Core | OptionT<M, A> | Option monad-transformer
Core | Either<L, R> | Right/Left choice monad
Core | EitherT<L, M, R> | Right/Left choice monad-transformer
Core | Fin<A> | Error handling monad, like Either<Error, A>
Core | FinT<M, A> | Error handling monad-transformer
Core | Try<A> | Exception handling monad
Core | TryT<M, A> | Exception handling monad-transformer
Core | Validation<FAIL, SUCCESS> | Validation applicative and monad for collecting multiple errors before aborting an operation
Core | ValidationT<FAIL, M, SUCCESS> | Validation applicative and monad-transformer

Location | Feature | Description
Core | Reader<E, A> | Reader monad
Core | ReaderT<E, M, A> | Reader monad-transformer
Core | Writer<W, A> | Writer monad that logs to a W constrained to be a Monoid
Core | WriterT<W, M, A> | Writer monad-transformer
Core | State<S, A> | State monad
Core | StateT<S, M, A> | State monad-transformer

Location | Feature | Description
Parsec | Parser<A> | String parser monad and full parser combinators library
Parsec | Parser<I, O> | Parser monad that can work with any input stream type

Location | Feature | Description
Core | Doc<A> | Produce nicely formatted text with smart layouts

Location | Feature | Description
Core | Patch<EqA, A> | Uses patch-theory to efficiently calculate the difference (Patch.diff(list1, list2)) between two collections of A and build a patch which can be applied (Patch.apply(patch, list)) to one to make the other (think git diff)

The traits are a major feature of v5+ language-ext that makes generic programming with higher-kinds a reality. Check out Paul's series on Higher Kinds to get a deeper insight.

Location | Feature | Description
Core | Applicative<F> | Applicative functor
Core | Eq<A> | Ad-hoc equality trait
Core | Fallible<F> | Trait that describes types that can fail
Core | Foldable<T> | Aggregation over a structure
Core | Functor<F> | Functor Map
Core | Has<M, TRAIT> | Used in runtimes to enable DI-like capabilities
Core | Hashable<A> | Ad-hoc has-a-hash-code trait
Core | Local<M, E> | Creates a local environment to run a computation
Core | Monad<M> | Monad trait
Core | MonadT<M, N> | Monad transformer trait
Core | Monoid<A> | A monoid is a type with an identity Empty and an associative binary operation +
Core | MonoidK<M> | Equivalent of monoids for working on higher-kinded types
Core | Mutates<M, OUTER_STATE, INNER_STATE> | Used in runtimes to enable stateful operations
Core | Ord<A> | Ad-hoc ordering / comparisons
Core | Range<SELF, NumOrdA, A> | Abstraction of a range of values
Core | Readable<M, Env> | Generalised Reader monad abstraction
Core | Semigroup<A> | Provides an associative binary operation +
Core | SemigroupK<M> | Equivalent of semigroups for working with higher-kinded types
Core | Stateful<M, S> | Generalised State monad abstraction
Core | Traversable<T> | Traversable structures support element-wise sequencing of Applicative effects
Core | Writable<M, W> | Generalised Writer monad abstraction

    These work a little like type-aliasing but they impart semantic meaning and some common operators for the underlying value.

Location | Feature | Description
Core | DomainType<SELF, REPR> | Provides a mapping from SELF to an underlying representation: REPR
Core | Identifier<SELF> | Identifiers (like IDs in databases, PersonId for example); they are equivalent to DomainType with equality
Core | VectorSpace<SELF, SCALAR> | Scalable values; can add and subtract self, but can only multiply and divide by a scalar. Can also negate
Core | Amount<SELF, SCALAR> | Quantities, such as the amount of money in USD on a bank account or a file size in bytes. Derives VectorSpace, IdentifierLike, DomainType, and is orderable (comparable)
Core | Locus<SELF, DISTANCE, SCALAR> | Works with space-like structures. Spaces have absolute and relative distances. Has an origin/zero point and derives DomainType, IdentifierLike, AmountLike and VectorSpace. DISTANCE must also be an AmountLike<SELF, REPR, SCALAR>

These features are still a little in flux as of 17th Oct 2024 (they may evolve, be renamed, or removed), but I like the idea!

    Further

    For some non-reference like documentation:

    • Paul’s blog: Notes from a Small Functional Island does deep dives into the philosophy of FP and the inner-workings of language-ext.
    • The wiki has some additional documentation, some might be a little out of date since the big v5 refactor, but should give some good insights.

    Contributing & Code of Conduct

    If you would like to get involved with this project, please first read the Contribution Guidelines and the Code of Conduct.

Source: https://github.com/louthy/language-ext
  • ansible-role-functions

    Try all kinds of functions.


    This example is taken from molecule/default/converge.yml and is tested on each push, pull request and release.

    ---
    - name: Converge
      hosts: all
      become: true
      gather_facts: true
    
      roles:
        - role: robertdebock.functions

    The machine needs to be prepared. In CI this is done using molecule/default/prepare.yml:

    ---
    - name: Prepare
      hosts: all
      become: true
      gather_facts: false
    
      roles:
        - role: robertdebock.bootstrap

    Also see a full explanation and example on how to use these roles.

    The default values for the variables are set in defaults/main.yml:

    ---
    # defaults file for functions
    
    functions_strings:
      - "A regular line."
      - "CAPITALS ONLY"
      - "lowercase only"
      - " Extra spaces. "
      - "A line with the word new and old."
      - "A line with integers. 1, 2 & 3."
    
    functions_integers:
      - 0
      - 1
      - 1.4
      - 1.5
      - 1.6
      - 2.0

    The following roles are used to prepare a system. You can prepare your system in another way.

Requirement | GitHub | GitLab
robertdebock.bootstrap | Build Status GitHub | Build Status GitLab

    This role is a part of many compatible roles. Have a look at the documentation of these roles for further information.

    Here is an overview of related roles: dependencies

    This role has been tested on these container images:

container | tags
Alpine | all
Amazon | Candidate
EL | 9
Debian | all
Fedora | all
Ubuntu | all

The minimum version of Ansible required is 2.12. Tests have been done on:

    • The previous version.
    • The current version.
    • The development version.

    If you find issues, please register them in GitHub.

    Apache-2.0.

    robertdebock

    Please consider sponsoring me.

Source: https://github.com/robertdebock/ansible-role-functions
  • PrettyWallet

TRON Vanity Address Collection Tool

System Description

• Collects blockchain wallet accounts and private keys, filtering and saving the wallets whose address tails match the specified rule.

Online Documentation

https://tronapi.gitbook.io/nice

Document last edited: 2022-08-02 00:20:47

The system mainly implements vanity address collection. The current script supports filtering out wallet addresses whose last 3 to 8 characters are identical ("3a" to "8a" tails), as shown below:

    • THGGmadtDDVPn4mLT5TF53fSDQTFXtHHHH
    • TQRPKcfPC5tZyBvmgd1ArHrfWdAbSCgggg
    • TUrSesdmyEU2QqAKgrTNzNVGUu8fbsvvvv
    • TVVLCtcuT2KPStR8hdhWr2C8PqUgWFwwww
    • TH5AXD68VCuziS1rho8MzonVo5YZyZCCCC
    • TGkVguCJpgzztNTuy9rcmPjpzkYo7DUUUU
    • TMzVutb18XKWYJHq3Q89P2j9siSEbdCCCC
    • TUqkuDL858n6tiYEAD2K6fNKUwwCPBQQQQ

Code Screenshot

    image

Operation Guide

Option 1:

• In the BaoTa panel, set up a URL-triggered script (you can run several at once), then visit: http://www.your-domain-or-ip.com/api/nice/choose?num=10000
• The num parameter above is the number of wallet addresses generated per run; with 16 GB of RAM, 10,000 can be generated per run. Multiple script instances can be opened.
• Generated addresses are filtered by the rule, and the results end up in project-root/public/address/<date>address.txt. Download the file locally and change the extension to .csv to view the wallets in spreadsheet form.
• Rarity differs by tail length: wallets with a 3-character repeated tail are the easiest to generate, 4-character tails still appear a few times an hour, and anything beyond that is a matter of probability over long script runs (see the estimate below).
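As a rough back-of-the-envelope estimate (assuming each address character is uniformly distributed over the 58-character Base58 alphabet, which is only approximately true for TRON addresses), the probability that an address ends in $k$ identical characters is

$$P(k) = 58 \cdot \left(\tfrac{1}{58}\right)^{k} = 58^{-(k-1)},$$

so a 3-character tail shows up about once per $58^2 \approx 3{,}400$ addresses and a 4-character tail about once per $58^3 \approx 195{,}000$, consistent with the generation rates described above.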

Option 2:

• Run the job from a shell:
• cd D:\somecode\lianghao-collection & php think nicev2
• Explanation: cd D:\somecode\lianghao-collection changes to the project root, then php think nicev2 runs the task (this pairs well with scheduled execution; the BaoTa panel's cron configuration is the most convenient).
• Import the database schema:
DROP TABLE IF EXISTS `fa_address`;
CREATE TABLE `fa_address`
(
    `id`         int(10) NOT NULL AUTO_INCREMENT COMMENT 'ID',
    `address`    varchar(50)  DEFAULT NULL COMMENT 'address',
    `privateKey` varchar(100) DEFAULT NULL COMMENT 'private key',
    `type`       varchar(20)  DEFAULT NULL COMMENT 'type',
    `time`       int(10) DEFAULT NULL COMMENT 'time',
    `date`       varchar(50)  DEFAULT NULL COMMENT 'date',
    PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8 COMMENT='valuable wallets';
    
• After the shell job succeeds, the matching wallet addresses appear in the database.
• Note: configure the database connection settings in database.php.

For Sale

• Price: 800 USDT
• Service: includes deployment and going live, plus remote guidance
• Features: open-source, unencrypted code; secondary development is supported, and custom development is available for an additional fee
• Contact: Telegram: @laowu2021
Source: https://github.com/annalyciaijaz699/PrettyWallet
  • rclone-rc

    rclone-rc

    A fully type-safe TypeScript API client for Rclone’s Remote Control (RC) interface, powered by @ts-rest and Zod.

    Tested with Rclone v1.70.0

    ⚠️ Work in Progress

    This library is currently under active development. Check out the current status for a list of implemented commands.

    Consider contributing if you need a specific command:

    1. Check src/api/index.ts for current implementation
    2. Add your needed command following the same pattern
    3. Open a Pull Request

    ✨ Features

    • 🔒 Fully Type-Safe: End-to-end type safety for all API calls, including async operations
    • 📄 OpenAPI Support: Generated spec for integration with any language/client
    • 🧩 Framework Agnostic: Works with any fetch client
    • 🚀 Async Operations: First-class support for Rclone’s async operations
    • ✅ Runtime Validation: Uses Zod to validate types at runtime
    • 💪 HTTP Status Handling: Error responses handled through typed status codes

    Installation

    # Using npm
    npm install rclone-rc
    
    # Using yarn
    yarn add rclone-rc
    
    # Using pnpm
    pnpm add rclone-rc

    Usage

    Basic Client

    import { createClient } from 'rclone-rc';
    
    const api = createClient({
      baseUrl: 'http://localhost:5572',
      username: 'your-username', // Optional if running with --rc-no-auth
      password: 'your-password', // Optional if running with --rc-no-auth
    });
    
    try {
      // Get rclone version with typed response
      const { status, body } = await api.version();
    
      if (status === 200) {
        console.log('Rclone version:', body.version); // typed
      } else if (status === 500) {
        console.log('Error:', body.error); // also typed
      }
    
      // List files with type-safe parameters and response
      const files = await api.list({
        body: { fs: 'remote:path', remote: '' }
      });
    
      if (files.status === 200) {
        console.log('Files:', files.body.list);
      }
    } catch (error) {
      // Only network errors will throw exceptions
      console.error('Network error:', error);
    }

    Error Handling

    This library handles errors in two ways:

    1. HTTP Status Errors: Returned as typed responses with appropriate status codes
    2. Network Errors: Thrown as exceptions when server is unreachable

    Async Operations

    For long-running operations:

    import { createClient, createAsyncClient } from 'rclone-rc';
    
    const api = createClient({ baseUrl: 'http://localhost:5572' });
    const asyncApi = createAsyncClient({ baseUrl: 'http://localhost:5572' });
    
    try {
      // Start async job
      const job = await asyncApi.list({
        body: {
          fs: 'remote:path',
          remote: '',
          _async: true, // You need to pass this flag to the async client
        }
      });
    
      // Access job ID and check status
      const jobId = job.body.jobid;
      // Check job status using the non-async client
      const status = await api.jobStatus({ body: { jobid: jobId } });
    
      if (status.status === 200 && status.body.finished) {
        console.log('Job output:', status.body.output);
      }
    } catch (error) {
      console.error('Network error:', error);
    }

    Runtime Type Validation

    Zod validates both request and response types at runtime:

    • Request validation: Parameters, body, and query are validated before sending
    • Response validation: Can be disabled with validateResponse: false in client options

      const api = createClient({
        baseUrl: 'http://localhost:5572',
        validateResponse: false, // true by default
      });

    OpenAPI Integration

    Generate an OpenAPI specification for use with other languages and tools:

    import { generateOpenApi } from '@ts-rest/open-api';
    import { rcloneContract } from 'rclone-rc';
    
    const openApiDocument = generateOpenApi(rcloneContract, {
      info: { title: 'Rclone RC API', version: '1.0.0' }
    });

    Access the raw OpenAPI specifications at:

    Development

    pnpm install     # Install dependencies
    pnpm build       # Build the project
    pnpm test        # Run tests
    pnpm lint        # Lint code
    pnpm format      # Format code
    pnpm openapi     # Generate OpenAPI spec

    Requirements

    • Node.js 18+
    • TypeScript 5.0+

    License

    MIT

Source: https://github.com/CodyAdam/rclone-rc

  • shiroko-kfcc

    shiroko-kfcc

Let's raid the Saemaul Geumgo (KFCC) for interest!

Hand it over!

status | description
ci | main branch
schedule | daily crawling
frontend | frontend

KFCC's "non-face-to-face exclusive products, which let you subscribe to products of the branch you have joined as well as of other branches," are the following. (Descriptions were taken from the MG더뱅킹 app as of 2023-01-08.)

• MG더뱅킹정기예금
  • A lump-sum deposit that MG grows for you once you pay in a fixed amount
  • #grow a lump sum #12 months #1M-50M KRW
  • Fixed-term deposit
• MG더뱅킹정기적금
  • An installment savings account where you pay in a fixed amount every month and build up seed money with MG
  • #save up a lump sum #6-12 months #up to 1M KRW per month
  • Fixed installment savings
• MG더뱅킹자유적금
  • An installment savings account where you pay in amounts freely and build up seed money with MG
  • #save up a lump sum #12 months #up to 1M KRW per month
  • Flexible installment savings
• 상상모바일통장
  • A checking account that can be opened remotely without visiting a branch
  • #no passbook #preferential rate on first signup #fee discounts
  • Excluded, as it is not a deposit/savings product

shiroko-kfcc covers the KFCC fixed-term deposit, fixed installment savings, and flexible installment savings products that can be subscribed to without opening an account.

    feature

• CLI-based KFCC interest rate crawler
  • directory: ./packages/cli
• KFCC interest rate frontend
  • directory: ./packages/frontend
  • MG더뱅킹정기예금, MG더뱅킹정기적금, MG더뱅킹자유적금
• KFCC interest rate data managed in git branches

    data

The data is built by crawling the KFCC branch locator (금고위치안내).

Source: https://github.com/if1live/shiroko-kfcc
  • aks-aad-integration

    aks-aad-integration

Steps involved in creating an AKS cluster integrated with Azure Active Directory (AAD).

Prerequisites

1. Azure Subscription
    2. Access to Azure AD and permissions
    3. AZ CLI installed
    4. Kubectl installed

Create an Azure Active Directory app registration for the AKS server

Integrating AKS with AAD involves creating two AAD app registrations: one representing the server and another for the client.

    az login

    AAD_AKS_SERVER_APP="AKSAADServerApp"

    #Create server app registration

    az ad app create --display-name=$AAD_AKS_SERVER_APP --reply-urls "https://$AAD_AKS_SERVER_APP"

Make a note of the app id returned above

SERVER_APP_ID=

#Set the groupMembershipClaims value to All in the manifest

az ad app update --id $SERVER_APP_ID --set groupMembershipClaims=All

#Create a secret

az ad app credential reset --id $SERVER_APP_ID

#Make a note of the password in the output returned above

SERVER_APP_PASSWORD=

#!/bin/bash

ENV_SHORT_NAME='dev'
AAD_SCOPE='Scope'
AAD_ROLE='Role'
SERVER_APP_NAME=aksaad${ENV_SHORT_NAME}serverapp
USER_READ_ALL_DELEGATED='a154be20-db9c-4678-8ab7-66f6cc099a59'
DIRECTORY_READ_ALL_DELEGATED='06da0dbc-49e2-44d2-8312-53f166ab848a'
DIRECTORY_READ_ALL_APPLICATION='7ab1d382-f21e-4acd-a863-ba3e13f7da61'
MICROSOFT_GRAPH_GUID='00000003-0000-0000-c000-000000000000'

az ad app create --reply-urls https://$SERVER_APP_NAME --display-name $SERVER_APP_NAME --password $SERVER_APP_PASSWORD
SERVER_APP_ID=$(az ad app list --output json | jq -r --arg appname $SERVER_APP_NAME '.[] | select(.displayName==$appname) | .appId')
az ad app update --id $SERVER_APP_ID --set groupMembershipClaims=All
az ad app permission add --id $SERVER_APP_ID --api $MICROSOFT_GRAPH_GUID --api-permissions $USER_READ_ALL_DELEGATED=$AAD_SCOPE $DIRECTORY_READ_ALL_DELEGATED=$AAD_SCOPE $DIRECTORY_READ_ALL_APPLICATION=$AAD_ROLE

az ad app permission admin-consent --id $SERVER_APP_ID

    #Client Application

CLIENT_APP_ID=$(az ad app create --display-name "${SERVER_APP_NAME}-Client" --native-app --reply-urls "https://${SERVER_APP_NAME}-Client" --query appId -o tsv)
SERVER_OAUTH_PERMISSION_ID=$(az ad app show --id $SERVER_APP_ID --query "oauth2Permissions[0].id" -o tsv)

az ad app permission add --id $CLIENT_APP_ID --api $SERVER_APP_ID --api-permissions $SERVER_OAUTH_PERMISSION_ID=Scope
#az ad app permission grant --id $CLIENT_APP_ID --api $SERVER_APP_ID
az ad app permission admin-consent --id $CLIENT_APP_ID

    echo server_app_id = $SERVER_APP_ID
    echo server_app_secret = $SERVER_APP_PASSWORD
    echo client_app_id = $CLIENT_APP_ID

az aks create -g aks-cluster-resgrp -n hari-aks --aad-server-app-id $SERVER_APP_ID --aad-server-app-secret $SERVER_APP_PASSWORD --aad-client-app-id $CLIENT_APP_ID --node-count 1 --location northeurope -k 1.15.7 -a monitoring -a http_application_routing

Source: https://github.com/haripraghash/aks-aad-integration

  • matryoshka-mm

    🌋 M3: Matryoshka Multimodal Models

Learning multi-granularity visual tokens in a coarse-to-fine nested way
Mu Cai, Jianwei Yang, Jianfeng Gao, Yong Jae Lee

    Proceedings of the International Conference on Learning Representations (ICLR), 2025

    [Paper] [Project Page] [Demo] [Model Zoo]

    Release

• [6/3] 🔥 All training (llava-1.5-m3) and evaluation (llava-1.5-m3 and llava-next-m3) code is released.
    • [5/27] 🔥 We released Matryoshka Multimodal Models. We propose to learn visual tokens in a nested manner following a coarse-to-fine order. Checkout the paper and demo.

    The fundamental implementation of M3 can be found in this code snippet.
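The core idea can be sketched in a few lines of Python (a minimal illustration assuming a 24x24 CLIP visual token grid, i.e. 576 tokens; this is not the repository's exact code): the visual token grid is average-pooled at nested scales, so every coarser scale is derived from the same underlying features in a coarse-to-fine order.

    import torch
    import torch.nn.functional as F

    def nested_visual_tokens(tokens: torch.Tensor) -> dict[int, torch.Tensor]:
        """Pool a (batch, 576, dim) token grid into nested Matryoshka scales."""
        b, n, d = tokens.shape
        side = int(n ** 0.5)  # 24 for a 576-token grid
        grid = tokens.transpose(1, 2).reshape(b, d, side, side)
        scales = {}
        for s in (24, 12, 6, 3, 1):  # 576, 144, 36, 9 and 1 tokens
            pooled = F.adaptive_avg_pool2d(grid, s)  # coarser scales reuse the same features
            scales[s * s] = pooled.flatten(2).transpose(1, 2)
        return scales

    # Example: two images, 576 tokens each, hidden size 1024.
    vis = torch.randn(2, 576, 1024)
    for num_tokens, feats in nested_visual_tokens(vis).items():
        print(num_tokens, tuple(feats.shape))

At inference, the matryoshka_vis_token_scale argument seen in the examples below selects which of these scales (e.g. 576) is fed to the language model.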

Usage and License Notices: This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses for base language models for checkpoints trained using the dataset (e.g. the Llama community license for LLaMA-2 and Vicuna-v1.5). This project does not impose any additional constraints beyond those stipulated in the original licenses. Furthermore, users are reminded to ensure that their use of the dataset and checkpoints is in compliance with all applicable laws and regulations.


    Install

    If you are not using Linux, do NOT proceed, see instructions for macOS and Windows.

1. Clone this repository and navigate to the matryoshka-mm folder
    git clone https://github.com/mu-cai/matryoshka-mm.git
    cd matryoshka-mm
2. Install Package
    conda create -n matryoshka-mm python=3.10 -y
    conda activate matryoshka-mm
    pip install --upgrade pip  # enable PEP 660 support
    pip install -e .
3. Install additional packages for training cases
    pip install -e ".[train]"
    pip install flash-attn --no-build-isolation
    

    Quick Start With HuggingFace

    Example Code
    from llava.model.builder import load_pretrained_model
    from llava.mm_utils import get_model_name_from_path
    from llava.eval.run_llava import eval_model
    
    model_path = "mucai/llava-next-vicuna-7b-m3"
    
    tokenizer, model, image_processor, context_len = load_pretrained_model(
        model_path=model_path,
        model_base=None,
        model_name=get_model_name_from_path(model_path)
    )

Check out the details with the load_pretrained_model function in llava/model/builder.py.

    You can also use the eval_model function in llava/eval/run_llava.py to get the output easily. By doing so, you can use this code on Colab directly after downloading this repository.

    model_path = "mucai/llava-next-vicuna-7b-m3"
    prompt = "What are the things I should be cautious about when I visit here?"
    image_file = "https://llava-vl.github.io/static/images/view.jpg"
    
    args = type('Args', (), {
        "model_path": model_path,
        "model_base": None,
        "model_name": get_model_name_from_path(model_path),
        "query": prompt,
        "conv_mode": None,
        "image_file": image_file,
        "sep": ",",
        "temperature": 0,
        "top_p": None,
        "num_beams": 1,
        "max_new_tokens": 512,
        "matryoshka_vis_token_scale": 576,
    })()
    
    eval_model(args)

    M3 Weights

    Please check out our Model Zoo for all public M3 checkpoints, and the instructions of how to use the weights.

    Demo

    Gradio Web UI

    To launch a Gradio demo locally, please run the following commands one by one. If you plan to launch multiple model workers to compare between different checkpoints, you only need to launch the controller and the web server ONCE.

    flowchart BT
        %% Declare Nodes
        gws("Gradio (UI Server)")
        c("Controller (API Server):<br/>PORT: 10000")
        mw7b("Model Worker:<br/>llava-next-vicuna-7b-m3<br/>PORT: 40000")
        mw13b("Model Worker:<br/>llava-next-vicuna-7b-m3<br/>PORT: 40001")
        sglw13b("Backend:<br/>llava-v1.5-7b-m3<br/>http://localhost:30000")
        lsglw13b("Worker:<br/>lllava-v1.5-7b-m3<<br/>PORT: 40002")
    
        %% Declare Styles
        classDef data fill:#3af,stroke:#48a,stroke-width:2px,color:#444
        classDef success fill:#8f8,stroke:#0a0,stroke-width:2px,color:#444
        classDef failure fill:#f88,stroke:#f00,stroke-width:2px,color:#444
    
        %% Assign Styles
        class id,od data;
        class cimg,cs_s,scsim_s success;
        class ncimg,cs_f,scsim_f failure;
    
        subgraph Demo Connections
            direction BT
            c<-->gws
            
            mw7b<-->c
            mw13b<-->c
            lsglw13b<-->c
            sglw13b<-->lsglw13b
        end
    

    Launch a controller

    python -m llava.serve.controller --host 0.0.0.0 --port 30000

    Launch a gradio web server.

    python -m llava.serve.gradio_web_server --controller http://localhost:30000 --model-list-mode reload

    You just launched the Gradio web interface. Now, you can open the web interface with the URL printed on the screen. You may notice that there is no model in the model list. Do not worry, as we have not launched any model worker yet. It will be automatically updated when you launch a model worker.

    Launch a model worker

    This is the actual worker that performs the inference on the GPU. Each worker is responsible for a single model specified in --model-path.

    python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:30000 --port 40000 --worker http://localhost:40000 --model-path mucai/llava-next-vicuna-7b-m3

    Wait until the process finishes loading the model and you see “Uvicorn running on …”. Now, refresh your Gradio web UI, and you will see the model you just launched in the model list.

    You can launch as many workers as you want, and compare between different model checkpoints in the same Gradio interface. Please keep the --controller the same, and modify the --port and --worker to a different port number for each worker.

    python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:30000 --port <different from 40000, say 40001> --worker http://localhost:<change accordingly, i.e. 40001> --model-path <ckpt2>

    If you are using an Apple device with an M1 or M2 chip, you can specify the mps device by using the --device flag: --device mps.

    Launch a model worker (Multiple GPUs, when GPU VRAM <= 24GB)

    If the VRAM of your GPU is less than 24GB (e.g., RTX 3090, RTX 4090, etc.), you may try running it with multiple GPUs. Our latest code base will automatically try to use multiple GPUs if you have more than one GPU. You can specify which GPUs to use with CUDA_VISIBLE_DEVICES. Below is an example of running with the first two GPUs.

    CUDA_VISIBLE_DEVICES=0,1 python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:30000 --port 40000 --worker http://localhost:40000 --model-path mucai/llava-next-vicuna-7b-m3

    Launch a model worker (4-bit, 8-bit inference, quantized)

    You can launch the model worker with quantized bits (4-bit, 8-bit), which allows you to run the inference with reduced GPU memory footprint, potentially allowing you to run on a GPU with as few as 12GB VRAM. Note that inference with quantized bits may not be as accurate as the full-precision model. Simply append --load-4bit or --load-8bit to the model worker command that you are executing. Below is an example of running with 4-bit quantization.

    python -m llava.serve.model_worker --host 0.0.0.0 --controller http://localhost:30000 --port 40000 --worker http://localhost:40000 --model-path mucai/llava-next-vicuna-7b-m3 --load-4bit

    Launch a model worker (LoRA weights, unmerged)

You can train and launch the model worker with LoRA weights using our instructions here.

    CLI Inference

    Chat about images using LLaVA without the need of Gradio interface. It also supports multiple GPUs, 4-bit and 8-bit quantized inference. With 4-bit quantization, for our LLaVA-1.5-7B, it uses less than 8GB VRAM on a single GPU.

    python -m llava.serve.cli \
        --model-path mucai/llava-next-vicuna-7b-m3 \
        --image-file "https://llava-vl.github.io/static/images/view.jpg" \
        --matryoshka_vis_token_scale 576 \
        --load-4bit

    Train (with LLaVA-1.5)

    M3 finetunes LLaVA checkpoints using the exact same visual instruction data.

LLaVA is trained on 8 H100 GPUs with 80GB memory. To train on fewer GPUs, you can reduce the per_device_train_batch_size and increase the gradient_accumulation_steps accordingly, always keeping the global batch size the same: per_device_train_batch_size x gradient_accumulation_steps x num_gpus. For example, the global batch size of 128 below is 16 x 1 x 8 on 8 GPUs; on 2 GPUs, 16 x 4 x 2 gives the same result.

    Hyperparameters

    We use the exact same hyperparameters as LLaVA in finetuning. Hyperparameters used are provided below.

Hyperparameter | Global Batch Size | Learning rate | Epochs | Max length | Weight decay
LLaVA-v1.5-7B-M3 | 128 | 2e-5 | 1 | 2048 | 0

    Download Vicuna checkpoints (automatically)

    Our base model Vicuna v1.5, which is an instruction-tuned chatbot, will be downloaded automatically when you run our provided training scripts. No action is needed.

    M3 Visual Instruction Tuning

    1. Prepare data

Please download the annotation of the final mixture of our instruction tuning data, llava_v1_5_mix665k.json, and download the images from the constituting datasets:

    After downloading all of them, organize the data as follows in ./playground/data,

    ├── coco
    │   └── train2017
    ├── gqa
    │   └── images
    ├── ocr_vqa
    │   └── images
    ├── textvqa
    │   └── train_images
    └── vg
        ├── VG_100K
        └── VG_100K_2
    
2. Start training!

    You may download our pretrained projectors in Model Zoo. It is not recommended to use legacy projectors, as they may be trained with a different version of the codebase, and if any option is off, the model will not function/train as we expected.

    Training script with DeepSpeed ZeRO-3: finetune.sh.

If you do not have enough GPU memory:

    • Use LoRA: finetune_lora.sh. We are able to fit 13B training in 8-A100-40G/8-A6000, and 7B training in 8-RTX3090. Make sure per_device_train_batch_size*gradient_accumulation_steps is the same as the provided script for best reproducibility.
    • Replace zero3.json with zero3_offload.json which offloads some parameters to CPU RAM. This slows down the training speed.

    If you are interested in finetuning M3 model to your own task/data, please check out Finetune_Custom_Data.md

    Evaluation

We use the same benchmarks as LLaVA-1.5 and LLaVA-NeXT.

    For LLaVA-1.5, see Evaluation.md.

    For LLaVA-NeXT on image understanding, see lmms-eval.

    For LLaVA-NeXT on video understanding, see IG-VLM.

    Citation

If you find M3 useful for your research and applications, please cite using this BibTeX:

    @article{cai2024matryoshka,
      title={Matryoshka Multimodal Models},
      author={Cai, Mu and Yang, Jianwei and Gao, Jianfeng and Lee, Yong Jae},
  journal={Proceedings of the International Conference on Learning Representations},
      year={2025}
    }

    Acknowledgement

• Vicuna: the language model we built upon, and our base model Vicuna-13B has amazing language capabilities!

• LLaVA: the codebase we built upon, which has amazing multimodal abilities!


Source: https://github.com/mu-cai/matryoshka-mm
  • Online_shopping_-system

    Online_shopping_-system

    Online shopping is a form of electronic commerce which allows consumers to directly buy goods or services from a seller over the Internet using a web browser or a mobile app.

The Online Shopping System is built in PHP, using XAMPP as a virtual server.

This project contains an admin side and a user side. A user can view shopping item details, sign up, and buy products online, while the admin can add and manage items, users, and products.

Talking about the features of this system, the admin can manage the users and products and check subscribers, while the user can shop for all the available shopping items after signing in. In order to buy products online, he/she has to sign up/in through the system.

    The user can shop for multiple items and pay online through cards. This simple project is similar to the online shop portal. The design of this project is very simple so that the user won’t find any difficulties while working on it.

    Flow Chart:

    image

    Images of the Webpage:

    Homepage:

    page1

    page2

    Product Checkout:

    page3

    Payment Gateway:

    page4

    Admin Page:

    page5

    page6

    Video for Presentation of Online Shopping Site(Ecommerce)

    Online_shopping_system.mp4
Source: https://github.com/Surya2Developer/Online_shopping_-system