Complete removal of unused fields to simplify data structure:
1. SpeechRecognitionResult Model Simplified:
- Removed 'confidence' field (was always hardcoded to 0.8)
- Removed 'alternatives' field (was always empty array)
- Kept only 'recognizedWords' which is actually used
- Updated constructor, fromMap, toMap, toString, ==, hashCode accordingly
2. YxAsrService Updates:
- Simplified _sendResult() method signature
- Removed unused confidence and alternatives parameters
- Updated method call to only pass recognizedWords
- Cleaner method invocation: _sendResult(recognizedWords: result.text)
3. Benefits Achieved:
- 🧹 Simplified data structure - only essential fields remain
- 🚀 Reduced memory usage - no unnecessary field storage/transmission
- 💡 Cleaner API - method signatures reflect actual usage
- ⚡ Better performance - less data serialization/deserialization
- 🔍 Improved code clarity - no confusing unused parameters
4. Sherpa-ONNX Integration:
- OnlineRecognizerResult only provides: text, tokens, timestamps
- No confidence or alternatives data available from the library
- Our simplified structure now aligns with actual data source
This optimization removes all the 'fake' hardcoded values and focuses
on the actual speech recognition text result that users need.
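The simplified model described above can be sketched as follows. Only the `recognizedWords` field and the list of updated members come from these notes, so the exact constructor and serialization details are assumptions:

```dart
// Sketch of the simplified SpeechRecognitionResult after removing the
// hardcoded 'confidence' and always-empty 'alternatives' fields.
class SpeechRecognitionResult {
  final String recognizedWords;

  const SpeechRecognitionResult({required this.recognizedWords});

  factory SpeechRecognitionResult.fromMap(Map<String, dynamic> map) =>
      SpeechRecognitionResult(
          recognizedWords: map['recognizedWords'] as String? ?? '');

  Map<String, dynamic> toMap() => {'recognizedWords': recognizedWords};

  @override
  String toString() =>
      'SpeechRecognitionResult(recognizedWords: $recognizedWords)';

  @override
  bool operator ==(Object other) =>
      other is SpeechRecognitionResult &&
      other.recognizedWords == recognizedWords;

  @override
  int get hashCode => recognizedWords.hashCode;
}
```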
Critical fix for recognition logic to prevent duplicate processing:
1. Problem Identified:
- _recognitionTimer was repeatedly calling decode() on same audio data
- Same recognition results were being sent multiple times to UI
- Caused redundant processing and potential performance issues
2. Solution Implemented:
- Added _lastRecognizedText state variable to track previous results
- Only send recognition results when text content actually changes
- Reset _lastRecognizedText when starting new recording session
3. Logic Changes:
- Enhanced the recognition loop with duplicate detection: each decoded text is compared against the last result sent, and unchanged text is skipped
- Added debug logging for skipped duplicate results
- Reset state on startListening() to ensure clean slate
4. Benefits:
- Eliminates duplicate recognition results sent to UI
- Reduces unnecessary computation and network overhead
- Improves user experience with cleaner, non-repetitive updates
- Better resource utilization and battery life
This fix addresses the core issue where the recognition timer was
processing the same audio stream content repeatedly, ensuring each
unique recognition result is only sent once to the application.
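The duplicate-detection logic above can be sketched like this. `_lastRecognizedText` mirrors the state variable described in the notes; the surrounding class and the `onDecodeTick`/`_onResult` names are hypothetical:

```dart
// Hypothetical sketch of the timer-driven recognition loop with
// duplicate detection.
class RecognitionLoop {
  String _lastRecognizedText = '';
  final void Function(String) _onResult;

  RecognitionLoop(this._onResult);

  void startListening() {
    // Reset the cached text so a new session starts with a clean slate.
    _lastRecognizedText = '';
  }

  // Called on each timer tick with the latest decoded text.
  void onDecodeTick(String recognizedText) {
    if (recognizedText == _lastRecognizedText) {
      // Same text as last time: skip it (debug logging would go here).
      return;
    }
    _lastRecognizedText = recognizedText;
    _onResult(recognizedText);
  }
}
```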
Major SDK simplification by removing redundant final result processing:
1. YxAsrService changes:
- Remove final result retrieval in stopListening()
- Remove finalResult parameter from _sendResult()
- Simplify stop logic to only reset stream state
- Eliminate duplicate API calls that provided no additional value
2. SpeechRecognitionResult model changes:
- Remove finalResult property and related logic
- Update constructor, factory methods, toString, equals, hashCode
- Remove finalResult from toMap/fromMap serialization
- Simplify the model to focus on actual recognition data
3. Benefits:
- Cleaner, more maintainable codebase
- Reduced complexity and potential bugs
- Better performance (no redundant API calls)
- Simpler API for developers to use
- Real-time text appending works seamlessly without artificial distinctions
The analysis showed that 'final results' were identical to the last real-time result,
making the distinction unnecessary. Now all results are treated uniformly as
real-time updates, providing a smoother and more intuitive user experience.
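The simplified service flow can be sketched as follows. The `YxAsrService`, `_sendResult`, and `stopListening` names come from the notes; the stream-based body is an assumption:

```dart
import 'dart:async';

// Sketch of YxAsrService after removing the redundant final-result path.
class YxAsrService {
  final StreamController<String> _results = StreamController.broadcast();
  bool _listening = false;

  Stream<String> get results => _results.stream;

  // After this change every result is sent the same way: a real-time
  // update carrying only the recognized text, with no finalResult flag.
  void _sendResult({required String recognizedWords}) {
    _results.add(recognizedWords);
  }

  Future<void> startListening() async {
    _listening = true;
  }

  // stopListening() no longer fetches a separate "final result" (which
  // duplicated the last real-time update); it only resets stream state.
  Future<void> stopListening() async {
    _listening = false;
  }
}
```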
1. Add finalResult property to SpeechRecognitionResult class
- Distinguish between real-time and final recognition results
- Update factory methods, toString, equals, and hashCode
- Update toMap and fromMap methods
2. Update YxAsrService to support finalResult flag
- Add finalResult parameter to _sendResult method
- Mark final results with finalResult: true
- Keep real-time results as finalResult: false (default)
3. Remove unused methods to clean up codebase
- Remove unused _toggleRecording method
- Remove unused _updateTextController method
- Clean up orphaned comments
These fixes resolve linter errors and ensure proper text appending functionality.
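The finalResult-flagged model from this entry (the earlier design that the entry above later removed) can be sketched as follows; everything beyond the `finalResult` property and its default is an assumption:

```dart
// Sketch of the earlier SpeechRecognitionResult with the finalResult
// flag distinguishing real-time updates (false, the default) from
// final results (true).
class SpeechRecognitionResult {
  final String recognizedWords;
  final bool finalResult;

  const SpeechRecognitionResult({
    required this.recognizedWords,
    this.finalResult = false, // real-time results default to false
  });

  Map<String, dynamic> toMap() =>
      {'recognizedWords': recognizedWords, 'finalResult': finalResult};
}
```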