This study demonstrates that neural activity in the human brain aligns linearly with the internal contextual embeddings of speech and language within large language models (LLMs) as they process everyday conversations.
This tutorial demonstrates how to build a document search engine with semantic search, using Hugging Face embeddings, Chroma DB, and LangChain.
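The pipeline in that tutorial boils down to embedding each document as a vector and ranking documents by similarity to the embedded query. A minimal, library-free sketch of that principle (the toy 3-dimensional vectors below are illustrative stand-ins for real model embeddings; the tutorial itself uses Hugging Face embeddings stored in Chroma DB):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

# Toy "embeddings" standing in for model output (illustrative values only)
docs = {
    "intro to transformers": [0.9, 0.1, 0.0],
    "cooking with garlic":   [0.0, 0.2, 0.9],
    "bert fine-tuning tips": [0.6, 0.4, 0.2],
}
query_vec = [0.85, 0.2, 0.05]  # pretend embedding of "transformer models"

# Rank documents by similarity to the query, most similar first
ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
print(ranked[0])
```

A vector store like Chroma does essentially this, but with approximate nearest-neighbor indexing so the ranking scales to millions of documents.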
This paper introduces a multi-agent NLP framework to address prompt injection vulnerabilities in generative AI systems. The framework utilizes specialized agents for generating responses, sanitizing outputs, and enforcing policy compliance, evaluated using novel metrics like Injection Success Rate (ISR), Policy Override Frequency (POF), Prompt Sanitization Rate (PSR), and Compliance Consistency Score (CCS). The system employs OVON for inter-agent communication.
A flexible Python library and CLI tool for interacting with Model Context Protocol (MCP) servers using OpenAI, Anthropic, and Ollama models.
The attention mechanism in Large Language Models (LLMs) helps derive the meaning of a word from its context. This involves encoding words as multi-dimensional vectors, calculating query and key vectors, and using attention weights to adjust the embedding based on contextual relevance.
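The mechanism described there can be sketched without any ML library: compute query-key similarity scores, normalize them into attention weights with a softmax, and mix the value vectors by those weights. A minimal scaled dot-product attention sketch (the 3-dimensional vectors are toy numbers; in a real LLM, Q, K, and V come from learned linear projections of the word embeddings):

```python
import math

def softmax(scores):
    """Turn raw scores into positive weights that sum to 1."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention: each output is a weighted mix of the
    value vectors, weighted by the query's similarity to each key."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [dot(q, k) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        mixed = [sum(w * v[j] for w, v in zip(weights, values))
                 for j in range(len(values[0]))]
        outputs.append(mixed)
    return outputs

# Toy 2-word sentence with 3-dim embeddings (illustrative numbers only)
Q = [[1.0, 0.0, 0.5], [0.2, 0.8, 0.1]]
K = [[1.0, 0.1, 0.4], [0.1, 0.9, 0.0]]
V = [[0.5, 0.5, 0.0], [0.0, 0.5, 0.5]]

contextual = attention(Q, K, V)
```

Because the weights sum to 1, each contextual embedding is a convex combination of the value vectors, which is how a word's representation gets adjusted by its context.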
Qodo-Embed-1-1.5B is a state-of-the-art code embedding model designed for retrieval tasks in the software development domain. It supports multiple programming languages and is optimized for natural language-to-code and code-to-code retrieval, making it highly effective for applications such as code search and retrieval-augmented generation.
Qodo releases Qodo-Embed-1-1.5B, an open-source code embedding model that outperforms competitors from OpenAI and Salesforce, enhancing code search, retrieval, and understanding for enterprise development teams.
This article provides a comprehensive guide on the basics of BERT (Bidirectional Encoder Representations from Transformers) models. It covers the architecture, use cases, and practical implementations, helping readers understand how to leverage BERT for natural language processing tasks.
An explanation of the differences between encoder- and decoder-style large language model (LLM) architectures, including their roles in tasks such as classification, text generation, and translation.
A detailed guide on creating a text classification model with Hugging Face's transformer models, including setup, training, and evaluation steps.