Tags: classification*

10 bookmark(s)

  1. Researchers have categorized altered states of consciousness (ASCs) according to three main approaches: the nature of the experience itself (state-based), the method of induction (method-based), and the underlying neurophysiological mechanisms (neuro/physio-based). Current research focuses on identifying overlapping phenomenological features across different ASCs, aiming to improve nuanced conceptualization and measurement, particularly for potential clinical applications such as psychedelic-assisted psychotherapy.

    - Altered states of consciousness (ASCs) have been classified according to different criteria.
    - State-based schemes use features of subjective experience for the classification.
    - Method-based schemes distinguish how or by which means an ASC is induced.
    - Neuro/Physio-based schemes detail biological mechanisms.
    - Across state-based schemes we extracted terms that suggest key subjective features of ASCs. A clustering analysis revealed eight core features of ASCs.
  2. This notebook provides an introduction to Naive Bayes classification, covering concepts, formulas, and implementation.
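As a quick companion to entry 2, here is a minimal sketch of multinomial Naive Bayes with Laplace smoothing; the toy corpus, labels, and function names are illustrative and not taken from the notebook itself.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train multinomial Naive Bayes. docs: list of (tokens, label) pairs."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)          # label -> token frequency
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab

def predict_nb(model, tokens):
    """Pick argmax over log P(c) + sum_w log P(w|c), Laplace-smoothed."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)   # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in tokens:
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [("free prize money".split(), "spam"),
        ("win money now".split(), "spam"),
        ("meeting agenda attached".split(), "ham"),
        ("project meeting notes".split(), "ham")]
model = train_nb(docs)
print(predict_nb(model, "win a prize".split()))  # prints "spam"
```

Unseen words such as "a" simply contribute the smoothed floor probability, which is the practical point of Laplace smoothing.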
  3. This article discusses how to apply vision language models (VLMs) to document understanding, covering application areas like agentic use cases, question answering, classification, and information extraction, as well as limitations like cost and processing long documents.
  4. A deep dive into advanced evaluation for data scientists, discussing why accuracy is often misleading and exploring alternative metrics for classification and regression tasks like ROC-AUC, Log Loss, R², RMSLE, and Quantile Loss.
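To illustrate entry 4's point that accuracy can mislead, a small sketch of binary log loss: the two toy classifiers below have identical accuracy at a 0.5 threshold, but log loss rewards the better-calibrated one. The probabilities are invented for illustration.

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Binary log loss; penalizes confident wrong probabilities."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)      # clip to avoid log(0)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

y = [1, 0, 1, 0]
confident = [0.9, 0.1, 0.9, 0.1]   # same accuracy as `hedged`
hedged    = [0.6, 0.4, 0.6, 0.4]
print(log_loss(y, confident))  # ≈ 0.105
print(log_loss(y, hedged))     # ≈ 0.511
```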
  5. The article discusses using Large Language Model (LLM) embeddings as features in traditional machine learning models built with scikit-learn. It covers the process of generating embeddings from text data using models like Sentence Transformers, and how these embeddings can be combined with existing features to improve model performance. It details practical steps including loading data, creating embeddings, and integrating them into a scikit-learn pipeline for tasks like classification.
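The embedding-plus-features step described in entry 5 can be sketched as below. Note the `embed` function here is a toy stand-in (normalized hashed character counts), not a real LLM embedding; an actual pipeline would call something like `SentenceTransformer(...).encode(texts)` from sentence-transformers instead. The feature names are invented for illustration.

```python
import numpy as np

def embed(texts, dim=8):
    """Toy stand-in for an LLM embedding call (hashed character counts)."""
    out = np.zeros((len(texts), dim))
    for i, text in enumerate(texts):
        for ch in text.lower():
            out[i, ord(ch) % dim] += 1.0
        n = np.linalg.norm(out[i])
        if n:
            out[i] /= n                    # unit-normalize, like many encoders
    return out

texts = ["late payment notice", "monthly newsletter"]
tabular = np.array([[3, 120.0],            # existing per-row features,
                    [0, 15.5]])            # e.g. prior_contacts, amount
X = np.hstack([embed(texts), tabular])     # embeddings + features side by side
print(X.shape)                             # (2, 10): ready for a sklearn model
```

The resulting matrix can be fed to any scikit-learn estimator, which is the integration pattern the article walks through.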
  6. This page details the topic namers available in Turftopic, allowing automated assignment of human-readable names to topics. It covers Large Language Models (local and OpenAI), N-gram patterns, and provides API references for the `TopicNamer`, `LLMTopicNamer`, `OpenAITopicNamer`, and `NgramTopicNamer` classes.
  7. Python tutorial for reproducible labeling of cutting-edge topic models with GPT-4o mini. The article details training a FASTopic model and labeling its results using GPT-4o mini, emphasizing reproducibility and control over the labeling process.
  8. Multi-class zero-shot embedding classification and error checking. This project improves zero-shot image/text classification using a novel dimensionality reduction technique and pairwise comparison, resulting in increased agreement between text and image classifications.
  9. This article demonstrates how to use the attention mechanism in a time series classification framework, specifically for classifying normal sine waves versus 'modified' (flattened) sine waves. It details the data generation, model implementation (using a bidirectional LSTM with attention), and results, achieving high accuracy.
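The attention step summarized in entry 9 can be sketched in isolation. This uses simple dot-product attention with a learned query vector to pool a sequence of hidden states (e.g. from a BiLSTM) into one context vector; the toy values and the scoring scheme are assumptions, not the article's exact model.

```python
import numpy as np

def attention_pool(hidden, query):
    """Dot-product attention over a sequence of hidden states.
    hidden: (T, d) states, e.g. from a BiLSTM; query: (d,) learned vector."""
    scores = hidden @ query                      # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over time steps
    context = weights @ hidden                   # (d,) weighted summary
    return context, weights

rng = np.random.default_rng(0)
T, d = 5, 4
hidden = rng.normal(size=(T, d))
query = rng.normal(size=d)
context, weights = attention_pool(hidden, query)
print(weights.sum())   # softmax weights sum to 1 (up to float error)
```

In a classifier, `context` would feed a final dense layer, and `weights` shows which time steps (e.g. the flattened region of the sine wave) drove the decision.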
  10. This article discusses the use of variational autoencoders (VAEs) to generate synthetic data as a solution to the impending data scarcity for training large language models. It explores how synthetic data can address issues like imbalanced datasets, particularly using the UCI Adult dataset, by generating synthetic samples to balance the dataset and improve classification accuracy.


SemanticScuttle - klotz.me: tagged with "classification"
